Review · Open Access

# Stochastic Hybrid Systems in Cellular Neuroscience

Paul C. Bressloff^{1} and James N. Maclaurin^{1}

**8**:12

https://doi.org/10.1186/s13408-018-0067-7

© The Author(s) 2018

**Received:** 9 January 2018 · **Accepted:** 5 August 2018 · **Published:** 22 August 2018

## Abstract

We review recent work on the theory and applications of stochastic hybrid systems in cellular neuroscience. A stochastic hybrid system or piecewise deterministic Markov process involves the coupling between a piecewise deterministic differential equation and a time-homogeneous Markov chain on some discrete space. The latter typically represents some random switching process. We begin by summarizing the basic theory of stochastic hybrid systems, including various approximation schemes in the fast switching (weak noise) limit. In subsequent sections, we consider various applications of stochastic hybrid systems, including stochastic ion channels and membrane voltage fluctuations, stochastic gap junctions and diffusion in randomly switching environments, and intracellular transport in axons and dendrites. Finally, we describe recent work on phase reduction methods for stochastic hybrid limit cycle oscillators.

## 1 Introduction

There are a growing number of problems in cell biology that involve the coupling between a piecewise deterministic differential equation and a time-homogeneous Markov chain on some discrete space *Γ*, resulting in a stochastic hybrid system, also known as a piecewise deterministic Markov process (PDMP) [37]. Typically, the phase space of the dynamical system is taken to be \(\mathbb {R}^{d}\) for finite *d*. One important example at the single-cell level is the occurrence of membrane voltage fluctuations in neurons due to the stochastic opening and closing of ion channels [2, 25, 30, 32, 54, 64, 80, 109, 114, 117]. Here the discrete states of the ion channels evolve according to a continuous-time Markov process with voltage-dependent transition rates and, in-between discrete jumps in the ion channel states, the membrane voltage evolves according to a deterministic equation that depends on the current state of the ion channels. In the limit that the number of ion channels goes to infinity, we can apply the law of large numbers and recover classical Hodgkin–Huxley-type equations. However, finite-size effects can result in the noise-induced spontaneous firing of a neuron due to channel fluctuations. Another important example is a gene regulatory network, where the continuous variable is the concentration of a protein product, and the discrete variable represents the activation state of the gene [79, 83, 108, 110, 131]. Stochastic switching between active and inactive gene states can allow a gene regulatory network to switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level, then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. 
A third example occurs within the context of motor-driven intracellular transport [23]. One often finds that motor-cargo complexes randomly switch between different velocity states such as anterograde versus retrograde motion, which can be modeled in terms of a special type of stochastic hybrid system known as a velocity jump process.

In many of the examples mentioned, we find that the transition rates between the discrete states \(n\in\varGamma\) are much faster than the relaxation rates of the piecewise deterministic dynamics for \(x\in \mathbb {R}^{d}\). Thus, there is a separation of time-scales between the discrete and continuous processes, so that if *t* is the characteristic time-scale of the relaxation dynamics, then *εt* is the characteristic time-scale of the Markov chain for some small positive parameter *ε*. Assuming that the Markov chain is ergodic, in the limit \(\varepsilon \rightarrow0\), we obtain a deterministic dynamical system in which one averages the piecewise dynamics with respect to the corresponding unique stationary measure. This then raises the important problem of characterizing how the law of the underlying stochastic process approaches this deterministic limit in the case of weak noise, \(0<\varepsilon \ll1\).
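The approach to the averaged deterministic system can be illustrated with a minimal simulation. The sketch below assumes a hypothetical two-state hybrid system with constant switching rates and linear drifts (all functions and rates are invented for illustration, not taken from the text); in state *n* the variable relaxes exactly, so each sample path can be generated without time-stepping error.

```python
import numpy as np

def simulate_pdmp(eps, alpha=1.0, beta=2.0, x0=0.0, T=10.0, seed=0):
    # Two-state hybrid system: dx/dt = b_n - x with b_0 = 0, b_1 = 1,
    # switching 0 -> 1 at rate alpha/eps and 1 -> 0 at rate beta/eps.
    rng = np.random.default_rng(seed)
    t, x, n = 0.0, x0, 0
    while t < T:
        rate = (alpha if n == 0 else beta) / eps
        tau = min(rng.exponential(1.0 / rate), T - t)
        b = 0.0 if n == 0 else 1.0
        x = b + (x - b) * np.exp(-tau)   # exact flow between jumps
        t += tau
        n = 1 - n                         # flip the discrete state
    return x

# The averaged ODE is dx/dt = rho_1 - x with rho_1 = alpha/(alpha+beta),
# so trajectories should concentrate around x* = 1/3 as eps -> 0.
alpha, beta = 1.0, 2.0
x_det = alpha / (alpha + beta)
for eps in [1.0, 0.1, 0.01]:
    xs = [simulate_pdmp(eps, seed=s) for s in range(200)]
    print(eps, abs(np.mean(xs) - x_det))
```

As *ε* decreases, not only the ensemble mean but each individual endpoint clusters near the deterministic fixed point, with fluctuations of size \(O(\sqrt{\varepsilon})\), consistent with the diffusion approximation discussed in Sect. 2.2.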

The notion of a stochastic hybrid system can also be extended to piecewise deterministic partial differential equations (PDEs), that is, infinite-dimensional dynamical systems. One example concerns molecular diffusion in cellular and subcellular domains with randomly switching exterior or interior boundaries [12, 17–19, 92]. The latter are generated by the random opening and closing of gates (ion channels or gap junctions) within the plasma membrane. In this case, we have a diffusion equation with boundary conditions that depend on the current discrete states of the gates; the particle concentration thus evolves piecewise, in between the opening or closing of a gate. One way to analyze these stochastic hybrid PDEs is to discretize space using finite-differences (method of lines) so that we have a standard PDMP on a finite-dimensional space. Diffusion in randomly switching environments also has applications to the branched network of tracheal tubes forming the passive respiration system in insects [18, 92] and volume neurotransmission [90].

This tutorial review develops the theory and applications of stochastic hybrid systems within the context of cellular neuroscience. A complementary review that mainly considers gene regulatory networks can be found elsewhere [14]. In Sect. 2, we summarize the basic theory of stochastic hybrid systems. In subsequent sections, we consider various applications, including stochastic ion channels and membrane voltage fluctuations (Sect. 3), stochastic gap junctions and diffusion in randomly switching environments (Sect. 4), and intracellular transport in axons and dendrites (Sect. 5). Finally, in Sect. 6, we present recent work on phase reduction methods for stochastic hybrid limit cycle oscillators.

## 2 Stochastic Hybrid Systems

In this section, we review the basic theory of stochastic hybrid systems. We start with the notion of a piecewise deterministic differential equation, which can be used to generate sample paths of the stochastic process. We then describe how the probability distribution of sample paths can be determined by solving a differential Chapman–Kolmogorov (CK) equation (Sect. 2.1). In many applications, including the stochastic ion channel models of Sect. 3, there is a separation of time-scales between a fast \(O(1/\varepsilon )\) switching process and slow \(O(1)\) continuous dynamics. In the fast switching limit \(\varepsilon \rightarrow0\), we obtain a deterministic dynamical system. In Sect. 2.2, we use an asymptotic expansion in *ε* to show how the CK equation can be approximated by a Fokker–Planck (FP) equation with an \(O(\varepsilon )\) diffusion term. Finally, in Sect. 2.3, we consider methods for analyzing escape problems in stochastic hybrid systems. We assume that the deterministic system is bistable so that, in the absence of noise, the long-time stable state of the system depends on the initial conditions. On the other hand, for finite switching rates, the resulting fluctuations can induce transitions between the metastable states. In the case of weak noise (fast switching, \(0 <\varepsilon \ll1\)), transitions are rare events involving large fluctuations that lie in the tails of the underlying probability density function. This means that estimates of mean first passage times (MFPTs) and other statistical quantities can develop exponentially large errors under the diffusion approximation. We describe a more accurate method for calculating MFPTs based on a WKB analysis.

*x* is a continuous variable in a connected bounded domain \(\varSigma\subset \mathbb {R}^{d}\) with regular boundary *∂Σ*, and *n* is a discrete stochastic variable taking values in the finite set \(\varGamma\equiv\{ 0,\ldots,N_{0}-1\}\). (It is possible to have a set of discrete variables, although we can always relabel the internal states so that they are effectively indexed by a single integer. We can also consider generalizations of the continuous process, in which the ODE (2.1) is replaced by a stochastic differential equation (SDE) or even a partial differential equation (PDE). To allow for such possibilities, we will refer to all of these processes as examples of a stochastic hybrid system.) When the internal state is *n*, the system evolves according to the ordinary differential equation (ODE)

*Σ*, there exists a positive constant \(K_{n}\) such that

*x* is confined to the domain *Σ* so that existence and uniqueness of a trajectory holds for each *n*. For fixed *x*, the discrete stochastic variable evolves according to a homogeneous continuous-time Markov chain with transition matrix \(\mathbf{W}(x)\) and corresponding generator \(\mathbf{A}(x)\), which are related according to

*x*, there is a nonzero probability of transitioning, possibly in more than one step, from any state to any other state of the Markov chain. This implies the existence of a unique invariant probability distribution on *Γ* for fixed \(x\in\varSigma \), denoted by \(\rho(x)\), such that

*x*. Hence \(\lambda_{m}(x)\) determines the jump times from the state *m*, whereas \(P_{nm}(x)\) determines the probability distribution that, when it jumps, the new state is *n* for \(n\neq m\). The hybrid evolution of the system with respect to \(x(t)\) and \(n(t)\) can then be described as follows; see Fig. 1. Suppose the system starts at time zero in the state \((x_{0}, n_{0})\). Call \(x_{0}(t)\) the solution of (2.1) with \(n=n_{0}\) such that \(x_{0}(0)=x_{0}\). Let \(t_{1}\) be the random variable (stopping time) such that

### 2.1 Chapman–Kolmogorov Equation

*t*, \(t>0\), given the initial conditions \(X(0)=x_{0}\), \(N(0)=n_{0}\). Introduce the probability density \(p_{n}(x,t|x_{0},n_{0},0) \) with

*p* evolves according to the forward differential Chapman–Kolmogorov (CK) equation [10, 61]

*n*, whereas the second term represents jumps in the discrete state *n*. Note that we have rescaled the matrix **A** by introducing the dimensionless parameter \(\varepsilon>0\). This is motivated by the observation that one often finds a separation of time-scales between the relaxation time for the dynamics of the continuous variable *x* and the rate of switching between the different discrete states *n*. The fast switching limit then corresponds to the case \(\varepsilon \rightarrow0\). Let us now define the averaged vector field \(\overline {F}: \mathbb {R}^{d} \to \mathbb {R}^{d}\) by

*ε*, the Markov chain undergoes many jumps over a small time interval *Δt* during which \(\varDelta x\approx0\), and thus the relative frequency of each discrete state *n* is approximately \(p_{n}^{*}(x)\). This can be made precise in terms of a law of large numbers for stochastic hybrid systems [51, 84].
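The sample-path construction described earlier translates directly into a simulation algorithm. The sketch below implements a generic PDMP path generator: in state *n*, the continuous variable follows an Euler step of \(\dot{x}=F_n(x)\), and jump times are drawn by accumulating the hazard integral \(\int_0^t \lambda_n(x(s))\,ds\) until it exceeds a unit exponential variate. The function names `F`, `lam`, `P` and the two-state example at the end are illustrative placeholders, not objects defined in the text.

```python
import numpy as np

def pdmp_path(F, lam, P, x0, n0, T, dt=1e-3, seed=1):
    # F: list of callables F[n](x) (piecewise drifts)
    # lam: list of callables lam[n](x) (x-dependent jump rates)
    # P: constant jump matrix, P[m, n] = probability of jumping n -> m
    rng = np.random.default_rng(seed)
    t, x, n = 0.0, float(x0), n0
    xi, H = rng.exponential(), 0.0   # Exp(1) threshold and hazard integral
    path = [(t, x, n)]
    while t < T:
        x += dt * F[n](x)            # Euler step of the piecewise flow
        H += dt * lam[n](x)          # accumulate int lambda_n(x(s)) ds
        t += dt
        if H >= xi:                  # jump time reached: resample state
            n = int(rng.choice(len(F), p=P[:, n]))
            xi, H = rng.exponential(), 0.0
        path.append((t, x, n))
    return path

# illustrative two-state example with constant unit rates
F = [lambda x: -x, lambda x: 1.0 - x]
lam = [lambda x: 1.0, lambda x: 1.0]
P = np.array([[0.0, 1.0], [1.0, 0.0]])   # deterministic flip n -> 1 - n
path = pdmp_path(F, lam, P, x0=0.5, n0=0, T=20.0)
print(len(path))
```

Between jumps the trajectory relaxes toward 0 (state 0) or 1 (state 1), so the sample path zig-zags inside the unit interval, mirroring the construction of Fig. 1.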

*m*, \(1\leq m \leq N_{0}-1\), such that \(F_{n}(0)=0\) for \(0\leq n \leq m-1\) and \(F_{n}(L)=0\) for \(m\leq n\leq N_{0}-1\). No-flux boundary conditions at the ends \(x=0,L\) take the form \(J(0,t)=J(L,t)=0\) with

*Σ*, that is,

*c*. The reflecting boundary conditions imply that \(c=0\). Since \(F_{n}(x)\) is nonzero for all \(x\in\varSigma\), we can express \(p_{1}(x)\) in terms of \(p_{0}(x)\):

*Z* exists.

### 2.2 Quasi-Steady-State (QSS) Diffusion Approximation

For small but nonzero *ε*, we can use perturbation theory to derive lowest order corrections to the deterministic mean field equation, which leads to a Langevin equation with noise amplitude \(O(\sqrt{\varepsilon})\). More specifically, perturbations of the mean-field equation (2.8) can be analyzed using a quasi-steady-state (QSS) diffusion or adiabatic approximation, in which the CK equation (2.6) is approximated by the Fokker–Planck (FP) equation for the total density \(C(x,t)=\sum_{n} p_{n}(x,t)\). The QSS approximation was first developed from a probabilistic perspective by Papanicolaou [119]. It has subsequently been applied to a wide range of problems in biology, including models of intracellular transport in axons [57, 123] and dendrites [111–113] and bacterial chemotaxis [73, 74, 116]. There have also been more recent probabilistic treatments of the adiabatic limit, which have been applied to various stochastic neuron models [118]. Finally, note that it is also possible to obtain a diffusion limit by taking the number of discrete states \(N_{0}\) to be large [30, 117].

The basic steps of the QSS reduction are as follows:

*x*, and \(\sum_{n} w_{n}(x,t)=0\). Substituting into equation (2.6) yields

*n* then gives

*C* and the fact that \(\mathbf{A}(x)\rho(x) = 0\), we have

**A**. We typically have to determine the pseudoinverse of **A** numerically.

*C* evolves according to the Itô Fokker–Planck (FP) equation

**A**. However, in the special case of a two-state discrete process (\(n=0,1\)), we have the explicit solution

One subtle point is the nature of boundary conditions under the QSS reduction, since the FP equation is a second-order parabolic PDE, whereas the original CK equation is an \(N_{0}\)th-order hyperbolic PDE. It follows that, for \(N_{0}>2\), there is a mismatch in the number of boundary conditions between the CK and FP equations. This implies that the QSS reduction may break down in a small neighborhood of the boundary, as reflected by the existence of boundary layers [152]. One way to eliminate the existence of boundary layers is to ensure that the boundary conditions of the CK equation are compatible with the QSS reduction.
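The pseudoinverse step of the QSS reduction is easy to carry out numerically. The sketch below computes the averaged drift and the \(O(\varepsilon)\) diffusion coefficient at a fixed *x* for an illustrative two-state generator with invented rates and drifts, so that the output can be checked against the closed-form two-state result \(D=\rho_0\rho_1(F_1-F_0)^2/(\alpha+\beta)\); the sign conventions and the constraint \(\sum_n z_n=0\) follow the expansion \(p_n=C\rho_n+\varepsilon w_n\) described above.

```python
import numpy as np

def qss_coefficients(A, F):
    # A: generator at fixed x (columns sum to zero); F: drift values F_n(x)
    w, V = np.linalg.eig(A)
    rho = np.real(V[:, np.argmin(np.abs(w))])
    rho = rho / rho.sum()                  # stationary distribution rho(x)
    Fbar = F @ rho                         # averaged vector field
    b = (F - Fbar) * rho                   # right-hand side, sums to zero
    z = np.linalg.pinv(A) @ b              # pseudoinverse solve of A z = b
    z = z - z.sum() * rho                  # fix the gauge: sum_n z_n = 0
    D = -F @ z                             # O(eps) diffusion coefficient
    return Fbar, D

alpha, beta = 2.0, 1.0                     # illustrative rates 0 -> 1, 1 -> 0
A = np.array([[-alpha, beta], [alpha, -beta]])
F = np.array([-1.0, 2.0])                  # F_0(x), F_1(x) at some fixed x
Fbar, D = qss_coefficients(A, F)
print(Fbar, D)
```

Here \(\rho=(1/3,2/3)\), so \(\overline{F}=1\) and \(D=(1/3)(2/3)(3)^2/3=2/3\), which the numerical pseudoinverse reproduces. Note that the gauge-fixing step matters: without imposing \(\sum_n z_n=0\), the minimal-norm pseudoinverse solution gives a different (incorrect) diffusion coefficient.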

### 2.3 Metastability in Stochastic Hybrid Systems

Several examples of stochastic hybrid systems are known to exhibit multistability in the fast-switching limit \(\varepsilon \rightarrow0\) [14]. That is, the deterministic equation (2.8) supports more than one stable equilibrium. In the absence of noise, the particular state of the system is determined by the initial conditions. On the other hand, when noise is included by taking into account the stochastic switching, fluctuations can induce transitions between the metastable states. If the noise is weak (fast switching, \(0 <\varepsilon \ll1\)), then transitions are rare events involving large fluctuations that lie in the tails of the underlying probability density function. This means that estimates of mean transition times and other statistical quantities can be sensitive to any approximations, including the Gaussian approximation based on the QSS reduction of Sect. 2.2, and can sometimes lead to exponentially large errors.

The analysis of metastability has a long history [70], particularly within the context of SDEs with weak noise. The underlying idea is that the mean rate to transition from a metastable state in the weak noise limit can be identified with the principal eigenvalue of the generator of the underlying stochastic process, which is a second-order differential operator in the case of a Fokker–Planck equation. Calculating the eigenvalue typically involves obtaining a Wentzel–Kramers–Brillouin (WKB) approximation of a quasistationary solution and then using singular perturbation theory to match the solution to an absorbing boundary condition [69, 97, 99, 103, 130]. The latter is defined on the boundary that marks the region beyond which the system rapidly relaxes to another metastable state, becomes extinct, or escapes to infinity. In one-dimensional systems (\(d=1\)), this boundary is simply an unstable fixed point, whereas in higher dimensions (\(d>1\)), it is generically a \((d-1)\)-submanifold. In the weak noise limit, the most likely paths of escape through an absorbing boundary are rare events, occurring in the tails of the associated functional probability distribution. From a mathematical perspective, the rigorous analysis of the tails of a distribution is known as large deviation theory [39, 53, 55, 138], which provides a rigorous probabilistic framework for interpreting the WKB solution in terms of optimal fluctuational paths. The analysis of metastability in chemical master equations has been developed along analogous lines to SDEs, combining WKB methods and large deviation principles [43, 45, 49, 53, 69, 75, 85, 124] with path-integral or operator methods [40, 41, 121, 128, 143]. The study of metastability in stochastic hybrid systems is more recent, and much of the theory has been developed in a series of papers on stochastic ion channels [25, 109, 114, 115], gene networks [108, 110], and stochastic neural networks [24]. Again there is a strong connection between WKB methods, large deviation principles [15, 51, 84], and formal path-integral methods [11, 26], although the connection is now more subtle.

*y* is in a neighborhood of \(x_{-}\), and \(\rho_{n}(y)\) is the stationary distribution of the switching process. Let \(T(y)\) denote the (stochastic) first passage time at which the system first reaches \(x_{0}\), given that it started at *y*. The distribution of first passage times \(f(t,y)\) is related to the survival probability that the system has not yet reached \(x_{0}\):

*ε*, the MFPT has an *Arrhenius-like* form analogous to SDEs [69]:

*quasipotential* or stochastic potential, and *Γ* is a prefactor. One important observation is that the escape time is exponentially sensitive to the precise form of *Φ*. If we were first to carry out the QSS reduction of Sect. 2.2 and then use a standard analysis of the one-dimensional FP equation in order to estimate the MFPT [61], then we would find that \(\varGamma=1\) and, to \(O(1)\),

*x*, then \(\varPhi(x)=U(x)/D\) with \(U(x)\) the deterministic potential. The escape time then depends on the barrier height *ΔE* shown in Fig. 2. As we have already commented, the Gaussian approximation may not accurately capture the statistics of rare events that dominate noise-induced escape. This is reflected by the observation that \(\varPhi_{\mathrm{QSS}}(x)\) can differ significantly from the true quasipotential. A much better estimate can be obtained using a WKB approximation.
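The practical content of the Arrhenius-like form \(\tau\sim\varGamma\mathrm{e}^{\varPhi/\varepsilon}\) is that the quasipotential barrier and the prefactor can be read off from escape-time data by a linear fit of \(\log\tau\) against \(1/\varepsilon\). The sketch below illustrates this with synthetic escape times generated from assumed values \(\varPhi=0.5\), \(\varGamma=2.0\); these numbers are invented for illustration and are not measurements from any model in the text.

```python
import numpy as np

# Synthetic escape times tau(eps) = Gamma * exp(Phi / eps),
# with assumed Phi = 0.5 and Gamma = 2.0 (illustrative values only).
eps = np.array([0.05, 0.04, 0.03, 0.02])
Phi_true, Gamma_true = 0.5, 2.0
tau = Gamma_true * np.exp(Phi_true / eps)

# Linear fit of log(tau) against 1/eps recovers slope Phi and
# intercept log(Gamma).
slope, intercept = np.polyfit(1.0 / eps, np.log(tau), 1)
Phi_est, Gamma_est = slope, np.exp(intercept)
print(Phi_est, Gamma_est)
```

In practice the measured \(\tau\) values would come from Monte Carlo escape simulations at several switching rates, and deviations from linearity in this plot signal subleading (prefactor) corrections.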

*n*. We also assume that the eigenvalues \(\lambda_{\varepsilon }^{(r)}\) all have positive real parts and that the smallest eigenvalue \(\lambda_{\varepsilon}^{(0)}\) is real and simple, so that we can introduce the ordering \(0<\lambda _{\varepsilon }^{(0)}<\operatorname{Re}[\lambda_{\varepsilon }^{(1)}]\leq\operatorname {Re}[\lambda_{\varepsilon }^{(2)}]\leq\cdots\). The exponentially slow rate of escape through \(x_{0}\) in the weak-noise limit means that \(\lambda_{\varepsilon }^{(0)}\) is exponentially small, \(\lambda _{\varepsilon }^{(0)}\sim\mathrm{e}^{-C/\varepsilon }\), whereas \(\operatorname {Re}[\lambda_{\varepsilon }^{(r)}]\) is only weakly dependent on *ε* for \(r\geq1\). Under these assumptions, we have the *quasistationary* approximation for large *t*:

*ρ* is the unique right eigenvector of **A**, for which \(\varPhi_{0}'=0\). Establishing the existence of a nontrivial positive solution requires more work and is related to the fact that the connection of the WKB solution to optimal fluctuational paths and large deviation principles is less direct in the case of stochastic hybrid systems.

*x*, we can use the Perron–Frobenius theorem (see the end of Sect. 2.3) to show that, for fixed \((x,q)\), there exists a unique eigenvalue \(\varLambda_{0}(x, q)\) with a positive eigenvector \(R_{n}^{(0)}(x, q)\). The optimal fluctuational paths are obtained by identifying the Perron eigenvalue \(\varLambda_{0}(x, q)\) as a Hamiltonian and finding zero energy solutions to Hamilton’s equations
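Computing the Perron eigenvalue at fixed \((x,q)\) is a small dense eigenvalue problem. The sketch below evaluates \(\varLambda_0(x,q)\) for the matrix with components \(A_{nm}(x)+qF_{n}(x)\delta_{n,m}\), using an illustrative two-state generator with invented constant entries; for this example the nontrivial zero-energy root \(\varLambda_0=0\) (besides the trivial root \(q=0\)) occurs at \(q=-3/2\), which would determine \(\varPhi_0'(x)\) at that point.

```python
import numpy as np

def perron_eigenvalue(A, F, q):
    # Perron root of L = A + q * diag(F): eigenvalue with largest real part,
    # guaranteed simple with a positive eigenvector after a suitable shift.
    L = A + q * np.diag(F)
    w, V = np.linalg.eig(L)
    k = np.argmax(w.real)
    R = np.real(V[:, k])
    if R.sum() < 0:
        R = -R                      # normalize the sign of the eigenvector
    return w.real[k], R

alpha, beta = 2.0, 1.0              # illustrative constant switching rates
A = np.array([[-alpha, beta], [alpha, -beta]])
F = np.array([-1.0, 2.0])           # illustrative drifts F_0(x), F_1(x)

lam_triv, R = perron_eigenvalue(A, F, 0.0)    # trivial zero-energy root q = 0
lam_wkb, _ = perron_eigenvalue(A, F, -1.5)    # nontrivial zero-energy root
print(lam_triv, lam_wkb)
```

Scanning `q` for the nontrivial zero of \(\varLambda_0(x,q)\) at each *x* and integrating \(q=\varPhi_0'(x)\) is then one practical route to the quasipotential.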

#### 2.3.1 Calculation of Principal Eigenvalue

*x* and WKB potential \(\varPhi_{0}\), the matrix operator

*S* of *Ā*. That is, we have the solvability condition

*S* satisfying

*ε*,

*x*, and \({\mathscr {N}}\) is the normalization factor,

#### 2.3.2 Two-State Model

*x*. The resulting quasipotential differs significantly from the one obtained by carrying out a QSS diffusion approximation of the stochastic hybrid system along the lines outlined in Sect. 2.2.

**S** satisfy

#### 2.3.3 Fredholm Alternative Theorem

*M*-dimensional linear inhomogeneous equation \(\mathbf{A}\mathbf{z}=\mathbf{b}\) with \(\mathbf{z},\mathbf{b}\in{\mathbb {R}}^{M}\). Suppose that the \(M\times M\) matrix **A** has a nontrivial null-space, and let **u** be a null vector of the adjoint matrix \(\mathbf{A}^{\dagger}\), that is, \(\mathbf{A}^{\dagger}\mathbf{u}=0\). The Fredholm alternative theorem for finite-dimensional vector spaces states that the inhomogeneous equation has a (nonunique) solution for **z** if and only if \(\mathbf{u}\cdot \mathbf{b}=0\) for all null vectors **u**. Let us apply this theorem to equation (2.18) for fixed *x*, *t*. The one-dimensional null-space is spanned by the vector with components \(u_{n}=1\), since \(\sum_{n}u_{n}A_{nm}=\sum_{n}A^{\dagger}_{mn}u_{n}=0\). Hence equation (2.18) has a solution, provided that

*x*.

#### 2.3.4 Perron–Frobenius Theorem

**T** is an irreducible positive finite matrix, then

- 1.
there is a simple eigenvalue \(\lambda_{0}\) of **T** that is real and positive, with positive left and right eigenvectors;
- 2.
the remaining eigenvalues *λ* satisfy \(|\lambda|<\lambda_{0}\).

**W** is an irreducible transition matrix, then the left positive eigenvector is \(\psi=(1,\ldots,1)\), and the right positive eigenvector is the stationary distribution *ρ*. In the case of the matrix operator \(\mathbf{L}(x)\) with components \(L_{nm}(x):=A_{nm}(x)+qF_{n}(x)\delta _{n,m}\), which appears in the eigenvalue equation (2.39), it is clear that not all components of the matrix are positive for a given \(x\in\varSigma\). However, taking \(\zeta>\sup_{x\in\varSigma}\|\mathbf{L}(x)\|_{\infty}\), the matrix \(\mathbf{L}(x)+\zeta\mathbf{I}\) satisfies the conditions of the Perron–Frobenius theorem for all \(x\in\varSigma\).
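The statements of the theorem are easy to verify numerically for a small example. The sketch below checks them for an arbitrary irreducible column-stochastic matrix `W` (entries invented for illustration): the Perron root is 1, the right eigenvector is the stationary distribution, and all other eigenvalues are strictly smaller in modulus.

```python
import numpy as np

# Illustrative irreducible column-stochastic matrix (columns sum to 1).
W = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.6, 0.1],
              [0.2, 0.1, 0.7]])

w, V = np.linalg.eig(W)
k = np.argmax(w.real)
lam0 = w.real[k]                      # Perron root
rho = np.real(V[:, k])
rho = rho / rho.sum()                 # normalized stationary distribution

print(lam0)                           # should be 1 for a stochastic matrix
print(rho)                            # positive, satisfies W rho = rho
```

The same check applied to \(\mathbf{L}(x)+\zeta\mathbf{I}\) with a sufficiently large shift *ζ* confirms a simple dominant eigenvalue \(\varLambda_0(x,q)+\zeta\) with a positive eigenvector, as used in the WKB analysis.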

## 3 Stochastic Ion Channels and Membrane Voltage Fluctuations

^{+} and Ca^{2+} outside the cell and a higher concentration of K^{+} inside the cell. The membrane current through a specific channel varies approximately linearly with changes in the voltage *v* relative to some equilibrium or reversal potential, which is the potential at which there is a balance between the opposing effects of diffusion and electrical forces. (We will focus on a space-clamped model of a neuron whose cell body is taken to be an isopotential.) Summing over all channel types, the total membrane current (flow of positive ions) leaving the cell through the cell membrane is

*s*, and \(V_{s}\) is the corresponding reversal potential.

Recordings of the current flowing through single channels indicate that channels fluctuate rapidly between open and closed states in a stochastic fashion. Nevertheless, most models of a neuron use deterministic descriptions of conductance changes, under the assumption that there are a large number of approximately independent channels of each type. It then follows from the law of large numbers that the fraction of channels open at any given time is approximately equal to the probability that any one channel is in an open state. The conductance \(g_{s}\) for ion channels of type *s* is thus taken to be the product \(g_{s}=\bar{g}_{s} P_{s}\), where \(\bar{g}_{s}\) is equal to the density of channels in the membrane multiplied by the conductance of a single channel, and \(P_{s}\) is the fraction of open channels. The voltage-dependence of the probabilities \(P_{s}\) in the case of a delayed-rectifier K^{+} current and a fast Na^{+} current was originally obtained by Hodgkin and Huxley [76] as part of their Nobel prize winning work on the generation of action potentials in the squid giant axon. The delayed-rectifier K^{+} current is responsible for terminating an action potential by repolarizing the neuron. The opening of the K^{+} channel requires structural changes in four identical and independent subunits, so that \(P_{\mathrm{K}} = n^{4}\), where *n* is the probability that any one gate subunit has opened. In the case of the fast Na^{+} current, which is responsible for the rapid depolarization of a cell leading to action potential generation, the probability of an open channel takes the form \(P_{\mathrm{Na}}=m^{3} h\), where \(m^{3}\) is the probability that an activating gate is open and *h* is the probability that an inactivating gate is open. Depolarization causes *m* to increase and *h* to decrease, whereas hyperpolarization has the opposite effect.
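The gating-variable formulas \(P_{\mathrm{K}}=n^4\) and \(P_{\mathrm{Na}}=m^3h\) are easy to evaluate at steady state. The sketch below uses the standard Hodgkin–Huxley rate functions for the squid giant axon in the modern sign convention (voltage in mV); these particular fits are quoted from the standard HH parameterization, not from this review.

```python
import numpy as np

# Standard HH rate functions (modern convention, v in mV, rates in 1/ms).
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * np.exp(-(v + 65.0) / 80.0)
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * np.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))

def x_inf(a, b, v):
    # steady-state open probability of a single gating subunit
    return a(v) / (a(v) + b(v))

v = -65.0  # approximate resting potential
P_K = x_inf(a_n, b_n, v) ** 4                       # n_inf^4
P_Na = x_inf(a_m, b_m, v) ** 3 * x_inf(a_h, b_h, v) # m_inf^3 * h_inf
print(P_K, P_Na)
```

At rest both open probabilities are small, and \(P_{\mathrm{K}}\) rises steeply with depolarization, consistent with the delayed-rectifier role described above.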

*m*, *n*, and *h* are usually formulated in terms of a simple kinetic scheme that describes voltage-dependent transitions of each gating subunit between open and closed states. More specifically, for each \(Y \in\{m,n,h \}\),

*v*:

### 3.1 Morris–Lecar Model

It is often convenient to consider a simplified planar model of a neuron, which tracks the membrane voltage *v*, and a recovery variable *w* that represents the fraction of open potassium channels. The advantage of a two-dimensional model is that we can use phase-plane analysis to develop a geometric picture of neuronal spiking. One well-known example is the Morris–Lecar (ML) model [100]. Although this model was originally developed to model Ca^{2+} spikes in molluscs, it has been widely used to study neural excitability for Na^{+} spikes [48], since it exhibits many of the same bifurcation scenarios as more complex models. The ML model has also been used to investigate subthreshold membrane potential oscillations (STOs) due to persistent Na^{+} currents [27, 145]. Another advantage of the ML model is that it is straightforward to incorporate intrinsic channel noise [80, 109, 114, 132]. To capture the fluctuations in membrane potential from stochastic switching in voltage-gated ion channels, we will consider a stochastic version of the ML model that includes both discrete jump processes (to represent the opening and closing of Ca^{2+} or Na^{+} ion channels) and a two-dimensional continuous-time piecewise process (to represent the membrane potential and recovery variable *w*). We thus have an explicit example of a two-dimensional PDMP. (We can also consider fluctuations in the opening and closing of the K^{+} ion channels, in which *w* is replaced by an additional discrete stochastic variable, representing the fraction of open K^{+} channels [114, 132]. This would yield a one-dimensional PDMP for the voltage alone.)

#### 3.1.1 Deterministic Model

^{2+}), a slow outward potassium current (K^{+}), a leak current (L), and an applied current (\(I_{\mathrm{app}}\)). (In [80, 114] the inward current is interpreted as a Na^{+} current, but the same parameter values as the original ML model are used.) For simplicity, each ion channel is treated as a two-state system that switches between an open and a closed state; the more detailed subunit structure of ion channels is neglected [64]. The membrane voltage *v* evolves as

*w* is the K^{+} gating variable. It is assumed that Ca^{2+} channels are in quasi-steady state \(a_{\infty}(v)\), thus eliminating the fraction of open Ca^{2+} channels as a variable. For \(i=\mathrm{K},\mathrm{Ca},{\mathrm{L}}\), let \(f_{i}=g_{i}(V_{i}-v)\), where \(g_{i}\) are ion conductances and \(V_{i}\) are reversal potentials. The opening and closing rates of the ion channels depend only on the membrane potential *v* and are represented by *α* and *β*, respectively, so that

^{+} dynamics is much slower than the voltage and Ca^{2+} dynamics, we can use a slow/fast analysis to investigate the initiation of an action potential following a perturbing stimulus [81]. The ML model can also support oscillatory solutions; see also Sect. 6.
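The deterministic ML equations are straightforward to integrate numerically. The sketch below uses the standard form of the model with the commonly used Hopf-regime parameter set; these parameter values are quoted for illustration and are not necessarily the ones used in the text.

```python
import numpy as np

# Morris–Lecar parameters (commonly used Hopf-regime set; illustrative).
C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0        # capacitance, conductances
VCa, VK, VL = 120.0, -84.0, -60.0           # reversal potentials (mV)
v1, v2, v3, v4, phi, Iapp = -1.2, 18.0, 2.0, 30.0, 0.04, 90.0

a_inf = lambda v: 0.5 * (1 + np.tanh((v - v1) / v2))   # Ca gating (QSS)
w_inf = lambda v: 0.5 * (1 + np.tanh((v - v3) / v4))   # K gating target
tau_w = lambda v: 1.0 / np.cosh((v - v3) / (2 * v4))   # K gating time scale

def integrate(v0=-40.0, w0=0.1, T=500.0, dt=0.05):
    # forward-Euler integration of the planar ML system
    n = int(T / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = v0, w0
    for i in range(n - 1):
        Iion = gCa * a_inf(v[i]) * (VCa - v[i]) \
             + gK * w[i] * (VK - v[i]) + gL * (VL - v[i])
        v[i + 1] = v[i] + dt * (Iion + Iapp) / C
        w[i + 1] = w[i] + dt * phi * (w_inf(v[i]) - w[i]) / tau_w(v[i])
    return v, w

v, w = integrate()
print(v.min(), v.max())
```

With this parameter set the system sits near a Hopf bifurcation in \(I_{\mathrm{app}}\), so plotting *v* against *w* in the phase plane displays the limit cycle geometry referred to above.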

#### 3.1.2 Stochastic Model

*N* of Ca^{2+} channels is taken to be relatively small. (For simplicity, we ignore fluctuations in the K^{+} channels by taking the number of the latter to be very large.) Let \(n(t)\) be the number of open Ca^{2+} channels at time *t*, which means that there are \(N-n(t)\) closed channels. The voltage and recovery variables then evolve according to the following PDMP:

*ε* to reflect the fact that Ca^{2+} channels open and close much faster than the relaxation dynamics of the system \((v,w)\). This is consistent with the parameter values of the ML model, where the slowness of the K^{+} channels is reflected by the parameter value \(\phi =0.04~\mbox{ms}^{-1}\) and the membrane rate constant is of order \(0.05~\mbox{ms}^{-1}\), whereas the transition rates of Ca^{2+} or Na^{+} channels are of order \(1~\mbox{ms}^{-1}\). The stationary density of the birth–death process is

**A** is the tridiagonal generator matrix of the birth–death process. Carrying out the QSS diffusion approximation of Sect. 2.2 then yields the following Itô FP equation for \(C(v,w,t)=\sum_{n=0}^{N}p_{n}(v,w,t)\) (see also [27]):

^{2+} or Na^{+} channels. The slow K^{+} channels are assumed to be frozen, so that they effectively act as a leak current, and each sodium channel is treated as a single activating subunit. The recovery variable *w* is thus fixed, so the potassium current can be absorbed into the function \(g(v):=-[wf_{\mathrm{K}}(v)+f_{\mathrm{L}}(v)+I_{\mathrm{app}}]\). We then have the one-dimensional PDMP

*v*, it follows that there exists an invariant interval for the voltage dynamics. In particular, let \(v_{0}\) denote the voltage for which \(\dot{v}=0\) when \(n=0\), and let \(v_{N}\) be the corresponding voltage when \(n=N\), that is, \(g(v_{0})=0\) and \(f_{\mathrm{Ca}}(v_{N})-g(v_{N})=0\). Then \(v(t)\in[v_{0},v_{N}]\) if \(v(0)\in[v_{0},v_{N}]\). In the fast switching limit \(\varepsilon \rightarrow0\), we obtain the first-order deterministic rate equation

*Ψ*, it is straightforward to show that equation (3.18) exhibits bistability for a range of stimuli \(I_{\mathrm{app}}\), that is, there exist two stable fixed points \(v_{\pm}\) separated by an unstable fixed point \(v_{0}\); see Fig. 5. The problem of the spontaneous initiation of an action potential for small but finite *ε* thus reduces to an escape problem for a stochastic hybrid system, as outlined in Sect. 2.3.
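A direct Monte Carlo sketch of this type of one-dimensional PDMP is given below: *N* two-state channels open and close at fast \(O(1/\varepsilon)\) rates, while the continuous variable drifts between jumps according to the current number *n* of open channels. The drift and switching functions here are invented bistable stand-ins, not the actual ML nonlinearities \(f_{\mathrm{Ca}}\) and *g*.

```python
import numpy as np

def simulate(N=10, eps=0.05, T=200.0, v0=0.0, seed=3):
    rng = np.random.default_rng(seed)
    # toy bistable drift: (fraction open) - 0.5, with a cubic pull-back
    drift = lambda v, n: (n / N) - 0.5 - 0.5 * v ** 3
    wplus = lambda v: 1.0 / (1.0 + np.exp(-4.0 * v))   # opening rate
    wminus = lambda v: 1.0 - wplus(v)                   # closing rate
    t, v, n = 0.0, v0, N // 2
    while t < T:
        # total channel-event rate, scaled by 1/eps (fast switching)
        rate = (wplus(v) * (N - n) + wminus(v) * n) / eps
        tau = rng.exponential(1.0 / rate)
        p_open = wplus(v) * (N - n) / (rate * eps)
        # integrate the piecewise drift over the waiting time tau
        m = max(5, int(tau / 0.1) + 1)
        h = tau / m
        for _ in range(m):
            v += h * drift(v, n)
        n = n + 1 if rng.random() < p_open else n - 1   # one channel flips
        t += tau
    return v, n

v_end, n_end = simulate()
print(v_end, n_end)
```

Running many realizations and recording the first time the trajectory crosses the unstable fixed point gives a direct MFPT estimate against which the WKB prediction of the escape rate can be checked.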

### 3.2 Metastability in the Stochastic Ion Channel Model

*Γ* and \(\varLambda_{0}\):

*n* and terms linear in *n* yields the pair of equations

*Γ* from these equations gives

*τ*, where *τ* is the mean escape time.

*Γ* is then determined by canceling terms independent of *m*:

*w* fixed. The resulting stochastic bistable model supported the generation of spontaneous action potentials (SAPs) due to fluctuations in the opening and closing of fast Ca^{2+} or Na^{+} channels. However, it is also possible to generate a SAP via fluctuations that cause several K^{+} channels to close simultaneously, effectively decreasing *w* and thereby causing *v* to rise. It follows that keeping *w* fixed in the stochastic model excludes the latter mechanism, so the resulting MFPT calculation underestimates the rate of spontaneous action potentials. To investigate this phenomenon, it is necessary to consider the full stochastic ML model given by equations (3.9), with a multiplicative noise term added to the dynamics of the recovery variable to take into account a finite number *M* of potassium ion channels. An additional complication is that the full model is an excitable rather than a bistable system, so it is not straightforward to relate the generation of SAPs to a noise-induced escape problem. Nevertheless, Newby et al. [110, 114] used WKB methods to identify the most probable paths of escape from the resting state and obtained the following results:

- (i)
The most probable paths of escape dip significantly below the resting value for *w*, indicating a breakdown of the deterministic slow/fast decomposition.
- (ii)
Escape trajectories all pass through a narrow region of state space (a bottleneck or stochastic saddle node) so that, although there is no well-defined separatrix for an excitable system, it is possible to formulate an escape problem by determining the MFPT to reach the bottleneck from the resting state.

## 4 Stochastic Gap Junctions and Randomly Switching Environments

To introduce the basic theory, we begin with the simpler problem of diffusion in a bounded interval with a randomly switching exterior boundary [11, 92]. The latter can represent the random opening and closing of a stochastic ion channel in the plasma membrane of a cell or a subcellular compartment [17].

### 4.1 Diffusion on an Interval with a Switching Exterior Boundary

*α*, *β*:

*x* at time *t* given the realization \(\sigma(T)\) up to time *T*. The population density evolves according to the diffusion equation

*u* satisfying the boundary conditions

*η*. Note that equations (4.2a)–(4.2b) hold only between jumps in the state of the gate, so that this is an example of a piecewise deterministic PDE. Since each realization of the gate will typically generate a different solution \(u(x,t)\), it follows that \(u(x,t)\) is a random field.

#### 4.1.1 Derivation of Moment Equations

A method for deriving moment equations of the stochastic density \(u(x,t)\) in the case of particles diffusing in a domain with randomly switching boundary conditions has been developed in [18]. The basic approach is to discretize the piecewise deterministic diffusion equation (4.2a)–(4.2b) with respect to space using a finite-difference scheme and then to construct the differential CK equation for the resulting finite-dimensional stochastic hybrid system. One nice feature of finite differences is that the boundary conditions can be incorporated into the resulting discrete linear operators. Since the CK equation is linear in the dependent variables, we can derive a closed set of moment equations for the discretized density and then retake the continuum limit. (For an alternative, probabilistic approach to deriving moment equations, see [90].)
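As a concrete illustration, the following sketch solves the steady-state first-moment equations for \(V_{n}(x)=\mathbb{E}[u(x)1_{N(t)=n}]\) obtained from a crude finite-difference discretization; the operators, rates, and boundary set-up are illustrative rather than the exact construction of [18].

```python
import numpy as np

# Steady-state first moments V_n(x) = E[u(x) 1_{N=n}] for diffusion on [0, L]
# with u(0) = eta held fixed and a gate at x = L switching between closed
# (reflecting, n = 0) and open (absorbing, n = 1) at rates alpha (0 -> 1)
# and beta (1 -> 0). Discretization and parameters are illustrative.
D, L, eta = 1.0, 1.0, 1.0
alpha, beta = 2.0, 1.0
N = 50                    # number of interior lattice points
a = L / (N + 1)           # lattice spacing

def discrete_laplacian(gate_open):
    A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))
    if not gate_open:
        A[N - 1, N - 1] = -1.0     # zero-flux (reflecting) right end
    # open gate: u(L) = 0 is already encoded by the truncated stencil
    return (D / a**2) * A

b = np.zeros(N)
b[0] = (D / a**2) * eta            # source from the fixed left end

A0, A1 = discrete_laplacian(False), discrete_laplacian(True)
p0, p1 = beta / (alpha + beta), alpha / (alpha + beta)  # gate probabilities

# 0 = A_n V_n + p_n b + sum_m W_nm V_m, with W the gate's Markov generator
I = np.eye(N)
M = np.block([[A0 - alpha * I, beta * I],
              [alpha * I, A1 - beta * I]])
V = np.linalg.solve(M, -np.concatenate([p0 * b, p1 * b]))
mean_u = V[:N] + V[N:]             # mean concentration E[u(x)]
```

Because the CK equation is linear, no closure approximation is needed: the mean profile follows from a single linear solve, and it decreases monotonically from the source toward the randomly gated end.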

*a* such that \((N+1)a=L\) for integer *N*, and let \(u_{j}=u(aj)\), \(j=0,\ldots,N+1\). Then we obtain the PDMP

*A* is the matrix

**u** give (after integrating by parts and using that \(p_{n}(\mathbf{u},t)\rightarrow0\) as \(\mathbf{u}\rightarrow\infty\) by the maximum principle)

*r*th-order moments (\(r\geq2\)) can be obtained in a similar fashion. Let

**u** give (after integration by parts)

#### 4.1.2 Analysis of First-Order Moments

*κ* by setting \(x=L\):

### 4.2 Diffusive Flux Along a One-Dimensional Array of Electrically Coupled Neurons

*M* cells that are connected via gap junctions; see Fig. 10. For the moment, we ignore the effects of stochastic gating. Since gap junctions have relatively high resistance to flow compared to the cytoplasm, we assume that each intercellular membrane junction acts like an effective resistive pore with some permeability

*μ*. Suppose that we label the cells by an integer

*k*, \(k=1,\ldots,M\), and take the length of each cell to be

*L*. Let \(u(x,t)\) for \(x\in([k-1]L,kL)\) denote the particle concentration within the interior of the

*k*th cell, and assume that it evolves according to the diffusion equation

*M*,

### 4.3 Effective Permeability for Cells Coupled by Stochastically Gated Gap Junctions

This deterministic model has recently been extended to incorporate the effects of stochastically gated gap junctions [12]. The resulting model can be analyzed by extending the theory of diffusion in domains with randomly switching exterior boundaries [18] (see Sect. 4.1) to the case of switching interior boundaries. Solving the resulting first-order moment equations of the stochastic concentration allows us to calculate the mean steady-state concentration and flux, and thus extract the effective single-gate permeability of the gap junctions.

*l* and \(2L-l\) with \(0< l\leq L\). The basic problem can be formulated as follows: We wish to solve the diffusion equation in the open domain \(\varOmega=\varOmega_{1}\cup\varOmega_{2}\) with \(\varOmega_{1}=(0,l)\) and \(\varOmega_{2}=(l,2L)\), with the interior boundary between the two subdomains at \(x=l\) randomly switching between an open and a closed state. Let \(N(t)\) denote the discrete state of the gate at time

*t* with \(N(t)=0\) if the gate is closed and \(N(t)=1\) if it is open. Assume that transitions between the two states \(n=0,1\) are described by the two-state Markov process (4.1). The random opening and closing of the gate means that particles diffuse in a random environment according to the piecewise deterministic equation

*u* satisfying Dirichlet boundary conditions on the exterior boundaries of *Ω*,

#### 4.3.1 First-Order Moment Equations and Effective Permeability (\(M=2\))

*B*, *C* in terms of \(K_{1}\) so that we find

#### 4.3.2 Multicell Model (\(M>2\))

*M* identical cells of length *L* coupled by \(M-1\) gap junctions at positions \(x=l_{k}=kL\), \(1\leq k \leq M-1\); see Fig. 10. (Interestingly, such a model is formally equivalent to a signaling model analyzed in [94].) The analysis is considerably more involved if the gap junctions physically switch, because there are significant statistical correlations arising from the fact that all the particles move in the same random environment, which exists in \(2^{M-1}\) different states if the gates switch independently [12]. Therefore we will restrict the analysis to the simpler problem in which individual particles independently switch conformational states: if a particle is in state \(N(t)=0\), then it cannot pass through a gate, whereas if it is in state \(N(t)=1\), then it can. Hence, from the particle perspective, either all gates are open or all gates are closed. If \(V_{n}(x,t)\) is the concentration of particles in state

*n*, then we have the pair of PDEs given by equations (4.13a) and (4.13b) on the domain \(x\in[0,ML]\), except now the exterior boundary conditions are

*j*th gate are

*M* cells with \(M-1\) independent, stochastically gated gap junctions is

*M*-dependent with

### 4.4 Volume Neurotransmission

*u* satisfying the boundary conditions

*x* with [91]

*α* is the switching rate from the quiescent state to the firing state, and *β* is the switching rate of the reverse transition. Thus we observe the same mean concentration

*V* throughout the extracellular domain, even though some parts are further away from the source than others. Consistent with intuition, *V* increases with *μ*, which reflects the fact that the neuron on the boundary fires more often. Now suppose that both

*α* and *β* become large (fast switching) but their ratio *μ* is fixed. In this case, *η* becomes large, and \(V\rightarrow0\). This is due to the fact that any neurotransmitter that is released is rapidly reabsorbed at the same terminal. (Note that if the left-hand boundary is taken to be absorbing rather than reflecting, \(u(0,t)=0\), then the concentration is a linear function of *x*; this could represent a glial cell on the left-hand boundary, which absorbs neurotransmitter but does not fire.) The authors also consider the case where there is a source neuron at each end, so that each boundary switches according to an independent two-state Markov process. If we denote the two Markov processes by the discrete variables \(M(t)\in\{0,1\}\) and \(N(t)\in\{0,1\}\), respectively, then the boundary conditions become [91]

*x*.

## 5 Stochastic Vesicular Transport in Axons and Dendrites

The efficient delivery of mRNA, proteins, and other molecular products to their correct location within a cell (intracellular transport) is of fundamental importance to normal cellular function and development [1, 23]. The challenges of intracellular transport are particularly acute for neurons, which are amongst the largest and most complex cells in biology, especially with regard to the efficient trafficking of newly synthesized proteins from the cell body or soma to distant locations on the axon and dendrites. In healthy cells, the regulation of mRNA and protein trafficking within a neuron provides an important mechanism for modifying the strength of synaptic connections between neurons [9, 34, 72, 139], and synaptic plasticity is generally believed to be the cellular substrate of learning and memory. On the other hand, various types of dysfunction in protein trafficking appear to be a major contributory factor to a number of neurodegenerative diseases associated with memory loss, including Alzheimer’s [38].

Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion within the cytosol or the surrounding plasma membrane of the cell, and active motor-driven transport along polymerized filaments such as microtubules and F-actin that comprise the cytoskeleton. Newly synthesized products from the nucleus are mainly transported to other intracellular compartments or the cell membrane via a microtubular network that projects radially from organizing centres (centrosomes) and forms parallel fiber bundles within axons and dendrites. The same network is used to transport degraded cell products back to the nucleus. Moreover, various animal viruses including HIV take advantage of microtubule-based transport in order to reach the nucleus from the cell surface and release their genome through nuclear pores [36]. Microtubules are polarized filaments with biophysically distinct plus and minus ends. In general, a given molecular motor will move with a bias toward a specific end of the microtubule; for example, kinesin moves toward the (+) end and dynein moves toward the (−) end. Microtubules are arranged throughout an axon or dendrite with a distribution of polarities: in axons and distal dendrites, they are aligned with the (−) ends pointing to the soma (plus-end-out), and in proximal dendrites, they have mixed polarity.

Axons of neurons can extend up to 1 m in large organisms, yet many of their components are synthesized in the cell body. Axonal transport is typically divided into two main categories based upon the observed speed [29]: fast transport (1–9 *μ*m/s) of organelles and vesicles and slow transport (0.004–0.6 *μ*m/s) of soluble proteins and cytoskeletal elements. Slow transport is further divided into two groups: actin and actin-bound proteins are transported in slow component A, whereas cytoskeletal polymers such as microtubules and neurofilaments are transported in slow component B. It had originally been assumed that the differences between fast and slow components were due to differences in transport mechanisms, but direct experimental observations now indicate that they all involve fast motors but differ in how the motors are regulated. Membranous organelles, which function primarily to deliver membrane and protein components to sites along the axon and at the axon tip, move rapidly in a unidirectional manner, pausing only briefly. In other words, they have a high duty ratio—the proportion of time a cargo complex is actually moving. On the other hand, cytoskeletal polymers and mitochondria move in an intermittent and bidirectional manner, pausing more often and for longer time intervals, and sometimes reversing direction. Such transport has a low duty ratio.

### 5.1 Intracellular Transport as a Velocity Jump Process

*α*, *β* are the corresponding switching rates, which can depend on the current position *x*. In applications, we are typically interested in the marginal density \(p(x,t)=p_{0}(x,t)+p_{1}(x,t)\), which can be used to calculate moments of

*p* such as the mean and variance,

*n*th order, and

*Θ* is the Heaviside function. The first two terms clearly represent the ballistic propagation of the initial data along the characteristics \(x=\pm vt\), whereas the Bessel function terms asymptotically approach Gaussians in the large time limit. The steady-state equation for \(p(x)\) is simply \(p''(x)=0\), which from integrability means that \(p(x)=0\) pointwise. This is consistent with the observation that the above explicit solution satisfies \(p(x,t)\rightarrow0\) as \(t\rightarrow \infty\).
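The approach of the marginal density to a Gaussian can be checked by direct simulation. The following sketch (with illustrative parameters) samples the symmetric velocity jump process and compares the position variance at a large time with \(2Dt\), where \(D=v^{2}/2\lambda\) for reversal rate *λ*:

```python
import random

# Monte Carlo of the symmetric two-state velocity jump (telegraph) process
# dx/dt = v*s, s = +/-1, with reversal rate lam. At large times the marginal
# density approaches a Gaussian with diffusivity D = v^2/(2*lam), so
# Var[x(T)] ~ 2*D*T. All parameters are illustrative.

def telegraph_positions(v=1.0, lam=2.0, T=50.0, trials=2000, seed=0):
    rng = random.Random(seed)
    xs = []
    for _ in range(trials):
        x, s, t = 0.0, rng.choice((-1, 1)), 0.0
        while t < T:
            tau = rng.expovariate(lam)   # exponential time to next reversal
            dt = min(tau, T - t)
            x += v * s * dt
            s, t = -s, t + dt
        xs.append(x)
    return xs

xs = telegraph_positions()
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# For v = 1, lam = 2 we have 2*D*T = 25, so var should be close to 25
```

The sample mean stays near zero while the variance grows diffusively, consistent with the Gaussian large-time asymptotics of the Bessel function solution.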

*u* is the effective speed, \(u=v{p_{1}^{\mathrm{ss}}}/{\sum_{j=1}^{n}p_{j}^{\mathrm{ss}}}\), and \(\mathbf{p}^{\mathrm{ss}}\) is the steady-state solution for which \(\mathbf{A}\mathbf{p}^{\mathrm{ss}}=0\). They then showed that \(Q_{\varepsilon }(s,t)\rightarrow Q_{0}(s,t)\) as \(\varepsilon \rightarrow0\), where \(Q_{0}\) is a solution to the diffusion equation

*H* the Heaviside function. The diffusivity *D* can be calculated in terms of *v* and the transition matrix **A**. Hence the propagating and spreading waves observed in experiments could be interpreted as solutions to an effective advection–diffusion equation. More recently, a more rigorous analysis of spreading waves has been developed in [56, 57]. Note that the large time behavior is consistent with the solution of the diffusion equation obtained in the fast switching limit.
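The computation of the effective speed from the steady state \(\mathbf{A}\mathbf{p}^{\mathrm{ss}}=0\) can be sketched as follows; the three-state generator and its rates are purely illustrative:

```python
import numpy as np

# Effective transport speed from the steady state of the switching Markov
# chain: solve A p_ss = 0, normalize p_ss to a probability distribution,
# and weight the speed by the probability of the moving state. The 3-state
# generator below (0 = moving at speed v, 1 = paused on track, 2 = paused
# off track) and its rates are illustrative.

v = 1.0
A = np.array([[-0.5,  1.0,  0.0],
              [ 0.5, -1.5,  2.0],
              [ 0.0,  0.5, -2.0]])   # columns sum to zero (generator)

w, vecs = np.linalg.eig(A)
p_ss = np.real(vecs[:, np.argmin(np.abs(w))])  # null vector (eigenvalue 0)
p_ss = p_ss / p_ss.sum()                       # fix sign and normalization

u_eff = v * p_ss[0]                            # only state 0 moves
# exact values for these rates: p_ss = (8/13, 4/13, 1/13), u_eff = 8/13
```

For this generator the balance equations give \(p^{\mathrm{ss}}\propto(2,1,1/4)\), so the motor spends most of its time moving and the effective speed is \(8/13\) of the bare speed.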

*a*), anterograde pausing on track (state \(a_{0}\)), anterograde pausing off track (state \(a_{p}\)), retrograde pausing on track (state \(r_{0}\)), retrograde pausing off track (state \(r_{p}\)), and retrograde moving on track (state *r*). The state transition diagram is shown in Fig. 14.

### 5.2 Tug-of-War Model of Bidirectional Motor Transport

The observation that many types of motor-driven cargo move bidirectionally along microtubules suggests that cargo is transported by multiple kinesin and dynein motors. In proximal dendrites, it is also possible that one or more identical motors move a cargo bidirectionally by switching between microtubules with different polarities. In either case, it is well established that multiple molecular motors often work together as a motor-complex to pull a single cargo [144]. An open question concerns how the set of molecular motors pulling a vesicular cargo is coordinated. One possibility is that the motors compete against each other in a tug-of-war, where an individual motor interacts with the other motors through the force it exerts on the cargo. If the cargo exerts a force on a motor that opposes its preferred direction of motion, then the motor is more likely to unbind from the microtubule. A recent biophysical model has shown that a tug-of-war can explain the coordinated behavior observed in certain animal models [101, 102].

*t*, the internal state of the cargo-motor complex is fully characterized by the numbers \(n_{+}\) and \(n_{-}\) of anterograde and retrograde motors that are bound to a microtubule and thus actively pulling on the cargo. Assume that over the time-scales of interest all motors are permanently bound to the cargo, so that \(0 \leq n_{\pm}\leq N_{\pm}\). The tug-of-war model of Muller et al. [101, 102] assumes that the motors act independently, other than exerting a load on motors with the opposite directional preference. (However, some experimental work suggests that this is an oversimplification, that is, there is some direct coupling between motors [42].) Thus the properties of the motor complex can be determined from the corresponding properties of the individual motors together with a specification of the effective load on each motor. There are two distinct mechanisms whereby such bidirectional transport could be implemented [102]. First, the track could consist of a single polarized microtubule filament (or a chain of such filaments) on which up to \(N_{+}\) kinesin motors and \(N_{-}\) dynein motors can attach; see Fig. 15. Since individual kinesin and dynein motors have different biophysical properties, with the former tending to exert more force on a load, it follows that even when \(N_{+}=N_{-}\), the motion will be biased in the anterograde direction. Hence, this version is referred to as an asymmetric tug-of-war model. Alternatively, the track could consist of two parallel microtubule filaments of opposite polarity such that \(N_{+}\) kinesin motors can attach to one filament and \(N_{-}\) to the other. In the latter case, if \(N_{+}=N_{-}\), then the resulting bidirectional transport is unbiased, resulting in a symmetric tug-of-war model.

*F* is the applied force in the retrograde direction, \(F_{s}\) is the stall force satisfying \(v(F_{s})=0\), \(v_{f}\) is the forward motor velocity in the absence of an applied force in the preferred direction of the particular motor, and \(v_{b}\) is the backward motor velocity when the applied force exceeds the stall force. Dynein motors will also be taken to satisfy a linear force-velocity relation:

*F* is the force in the anterograde direction. Since the parameters associated with kinesin and dynein motors are different, we distinguish the latter by taking \(F_{s}\rightarrow\widehat{F}_{s}\) etc. The original tug-of-war model assumes that the binding rate of kinesin is independent of the applied force, whereas the unbinding rate is taken to be an exponential function of the applied force:

*x*-dependent mean velocity. Suppose, for example, that \(\overline{F}(x)=\bar{v}>0\) for \(x\notin [X-l,X+l]\) and that \(\overline{F}(x)\) is a unimodal function for \(x\in [X-l,X+l]\) with a negative minimum at \(x=X\). Here we are taking the region of enhanced *τ* to be an interval of length 2*l* centered about \(x=X\). Writing \(\overline{F}(x)=-\varPsi'(x-X)\), the corresponding deterministic potential has the form shown in Fig. 16. Since the mean velocity switches sign within the domain \([X-l,X+l]\), it follows that there exists one stable fixed point \(x_{0}\) and an unstable fixed point \(x_{*}\).
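The mean velocity of the motor-complex entering \(\overline{F}(x)\) is obtained from a force balance between the two motor teams. The following sketch implements the piecewise linear force–velocity relations described above and solves the balance condition by bisection; the parameter values and the numerical scheme are our own illustrative choices, not those of [101, 102].

```python
# Force balance for the tug-of-war model: each bound motor obeys a piecewise
# linear force-velocity relation (sub-stall and super-stall branches), and
# the cargo velocity is the value at which the total kinesin force balances
# the total dynein force. Parameter values and the bisection are illustrative.

def motor_force(v_pref, Fs, vf, vb):
    """Force exerted by one motor moving at velocity v_pref (measured in the
    motor's preferred direction): F = Fs*(1 - v/vf) below stall, and
    F = Fs*(1 - v/vb) when dragged backward (super-stall branch)."""
    return Fs * (1.0 - v_pref / (vf if v_pref >= 0 else vb))

def cargo_velocity(n_plus, n_minus,
                   kin=(6.0, 1.0, 0.006),    # (F_s, v_f, v_b) for kinesin
                   dyn=(3.0, 0.65, 0.072)):  # (F_s, v_f, v_b) for dynein
    """Bisect for the cargo velocity (anterograde positive) at which
    n_plus kinesins balance n_minus dyneins."""
    def imbalance(vc):
        return (n_plus * motor_force(vc, *kin)
                - n_minus * motor_force(-vc, *dyn))
    lo, hi = -dyn[1] + 1e-9, kin[1] - 1e-9   # bracket the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:
            lo = mid       # net anterograde force: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these illustrative parameters, `cargo_velocity(2, 1)` is clearly positive (a kinesin-dominated anterograde run), whereas `cargo_velocity(1, 3)` is slightly negative; local changes in binding and unbinding rates, such as a MAP-coated region, shift the balance between these regimes.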

One interesting effect of a local increase in MAPs is that it can generate stochastic oscillations in the motion of the motor-complex [113]. As a kinesin-driven cargo encounters the MAP-coated trapping region, the motors unbind at their usual rate and cannot rebind. Once the dynein motors are strong enough to pull the remaining kinesin motors off the microtubule, the motor-complex quickly transitions to (−) end directed transport. After the dynein-driven cargo leaves the MAP-coated region, kinesin motors can then reestablish (+) end directed transport until the motor-complex returns to the MAP-coated region. This process repeats until the motor-complex is able to move forward past the MAP-coated region. Interestingly, particle tracking experiments have observed oscillatory behavior during mRNA transport in dendrites [44, 125]. In these experiments, motor-driven mRNA granules move rapidly until encountering a fixed location along the dendrite where they slightly overshoot, then stop, move backward, and begin to randomly oscillate back and forth. After a period of time, lasting on the order of minutes, the motor-driven mRNA stops oscillating and resumes fast ballistic motion. The calculation of the mean time to escape the trapping region can be formulated as an FPT problem, in which the particle starts at \(x=x_{0}\) and has to make a rare transition to the unstable fixed point at \(x=x_{*}\). As in the analogous problem of stochastic action potential generation (Sect. 3), the QSS diffusion approximation breaks down for small *ε*, and we have to use the asymptotic methods of Sect. 2.3. The details can be found elsewhere [115].

### 5.3 Synaptic Democracy

A number of recent experimental studies of intracellular transport in axons of *C. elegans* and *Drosophila* have shown that (i) motor-driven vesicular cargo exhibits “stop and go” behavior, in which periods of ballistic anterograde or retrograde transport are interspersed by long pauses at presynaptic sites, and (ii) the capture of vesicles by synapses during the pauses is reversible in the sense that the aggregation of vesicles can be inhibited by signaling molecules resulting in dissociation from the target [96, 148]. It has thus been hypothesized that the combination of inefficient capture at presynaptic sites and the back-and-forth motion of motor-cargo complexes between proximal and distal ends of the axon facilitates a more uniform distribution of resources, that is, greater “synaptic democracy” [96].

The idea of synaptic democracy has previously arisen within the context of equalizing synaptic efficacies, that is, ensuring that synapses have the same potential for affecting the postsynaptic response regardless of their locations along the dendritic tree [71, 126]. An analogous issue arises within the context of intracellular transport, since vesicles are injected from the soma (anterograde transport), so that one might expect synapses proximal to the soma to be preferentially supplied with resources. In principle, this could be resolved by routing cargo to specific synaptic targets, but there is no known form of molecular address system that could support such a mechanism, particularly in light of the dynamically changing distribution of synapses. From a mathematical perspective, the issue of synaptic democracy reflects a fundamental property shared by the one-dimensional advection–diffusion equation used to model active transport and the cable equation used to model ionic current flow, namely, they generate an exponentially decaying steady-state solution in response to a localized source of active particles or current.
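The decay property is easily checked numerically. The following sketch (illustrative parameters and simple Dirichlet boundary conditions, not a detailed axon model) solves the steady state of \(Du''-Vu'-ku=0\) by finite differences and verifies the decay rate \(\mu=(\sqrt{V^{2}+4Dk}-V)/2D\):

```python
import numpy as np

# Steady state of the advection-diffusion-absorption model
# D u'' - V u' - k u = 0 on [0, L], u(0) = 1, u(L) = 0: the profile decays
# as exp(-mu*x) with mu = (sqrt(V^2 + 4*D*k) - V)/(2*D), so synapses
# proximal to the source are preferentially supplied. Parameters illustrative.

D, V, k, L = 0.1, 0.05, 0.02, 20.0
N = 400
h = L / N
x = np.array([(j + 1) * h for j in range(N - 1)])   # interior points

main = -2 * D / h**2 - k
upper = D / h**2 - V / (2 * h)      # coefficient of u_{j+1}
lower = D / h**2 + V / (2 * h)      # coefficient of u_{j-1}
A = (np.diag(main * np.ones(N - 1)) + np.diag(upper * np.ones(N - 2), 1)
     + np.diag(lower * np.ones(N - 2), -1))
b = np.zeros(N - 1)
b[0] = -lower * 1.0                 # u(0) = 1 enters the first row

u = np.linalg.solve(A, b)
mu = (np.sqrt(V**2 + 4 * D * k) - V) / (2 * D)
# decay rate measured between x = 5 and x = 10 should be close to mu
rate = -np.log(u[199] / u[99]) / (x[199] - x[99])
```

With irreversible absorption (\(k>0\)) the exponential decay is unavoidable; the reversible-delivery mechanism described below effectively removes the absorption term from the steady-state balance, which is what flattens the distribution of resources.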

*t* the complex is at position *x*, \(x\in(0,\infty)\), is in motile state *j*, and a vesicle is not bound to the complex. Similarly, let \(\widehat{p}_{n}(x,t)\) be the corresponding probability density when a vesicle is bound. We allow for the possibility that the velocities and diffusivity are different for the bound state by taking \(v_{\pm}\rightarrow \widehat{v}_{\pm}\) and \(D_{0}\rightarrow\widehat{D}_{0}\). The evolution of the probability density is described by the following system of partial differential equations:

*α*, *β* are the transition rates between the slowly diffusing and ballistic states. We also assume that there is a uniform distribution *c* of presynaptic targets along the axon, which can exchange vesicles with the motor-complex at the rates \(k_{\pm}\).

*α*, *β* are fast compared to the exchange rates \(k_{\pm}\) and the effective displacement rates of the complex on a fundamental microscopic length-scale such as the size of a synaptic target (\(l\sim1~\mu\mbox{m}\)). Following Sect. 2.2, we can then use a QSS diffusion approximation to derive an advection–diffusion equation for the total probability densities

*γu* and *γû*, which account for the fact that motor-complexes may dysfunction and no longer exchange cargo with synaptic targets. Equations (5.20a)–(5.20b) are supplemented by the following boundary conditions at \(x=0\):

*c* of presynaptic vesicles will remain bounded, provided that \(J_{0}>0\). Equation (5.21) implies that, at steady state,

This mechanism for synaptic democracy appears to be quite robust. For example, it can be extended to the case where each motor carries a vesicular aggregate rather than a single vesicle, assuming that only one vesicle can be exchanged with a target at any one time [13]. The effects of reversible vesicular delivery also persist when exclusion effects between motor-cargo complexes are taken into account [16] and when higher-dimensional cell geometries are considered [78].

## 6 Phase Reduction of Stochastic Hybrid Oscillators

In Sects. 2.3 and 3 we assumed that, in the adiabatic limit \(\varepsilon \rightarrow0\), the resulting deterministic dynamical system exhibited bistability, and we explored how random switching of the associated PDMP for small *ε* can lead to noise-induced transitions between metastable states. In this section, we assume that the deterministic system supports a stable limit cycle so that the corresponding PDMP acts as a stochastic limit cycle oscillator, at least in the weak noise regime. There is an enormous literature on the analysis of stochastic limit cycle oscillators for SDEs (for recent surveys, see the reviews [3, 47, 105]). On the other hand, as far as we are aware, there has been very little numerical or analytical work on limit cycle oscillations in PDMPs. A few notable exceptions are [21, 27, 52, 89, 137]. One possible approach would be to carry out a QSS diffusion approximation of the PDMP along the lines of Sect. 2.2 and then use stochastic phase reduction methods developed for SDEs. In this section, we review an alternative, variational method that deals directly with the PDMP [21], thus avoiding additional errors arising from the diffusion approximation. Another major advantage of the variational method is that it allows us to obtain rigorous exponential bounds on the expected time to escape from a neighborhood of the limit cycle [21, 22].

*d*-dimensional vector of independent Wiener processes. If the noise amplitude *ε* is sufficiently small relative to the rate of attraction to the limit cycle, then deviations transverse to the limit cycle are also small (up to some exponentially large stopping time). This suggests that the definition of a phase variable persists in the stochastic setting, and we can derive a stochastic phase equation by decomposing the solution to the SDE according to

*β*, which reflects the fact that there are different ways of projecting the exact solution onto the limit cycle [7, 21, 65, 87, 147]; see Fig. 20. One well-known approach is to use the method of isochrons [47, 62, 106, 135, 136, 149]. Recently, a variational method for carrying out the amplitude-phase decomposition for SDEs has been developed, which yields exact SDEs for the amplitude and phase [22]. Within the variational framework, different choices of phase correspond to different choices of inner product on \(\mathbb {R}^{d}\). By taking an appropriately weighted Euclidean norm, the minimization scheme determines the phase by projecting the full solution onto the limit cycle using Floquet vectors. Hence, in a neighborhood of the limit cycle, the phase variable coincides with the isochronal phase [7]. This has the advantage that the amplitude and phase decouple to leading order. In addition, the exact amplitude and phase equations can be used to derive strong exponential bounds on the growth of transverse fluctuations. It turns out that an analogous variational method can be applied to PDMPs [21], which will be outlined in the remainder of this section.

*π*-periodic function *Φ*. Note that the phase is neutrally stable with respect to perturbations along the limit cycle; this reflects the invariance of an autonomous dynamical system with respect to time shifts. By definition, *Φ* must satisfy the equation

*θ* gives

*J̅* is the 2*π*-periodic Jacobian matrix

*α* such that if \(x(0) = y\) and

### 6.1 Floquet Decomposition

*T*, \(\sigma_{T}=\{N(t),0\leq t \leq T\}\). Suppose that there is a finite sequence of jump times \(\{t_{1},\ldots, t_{r}\}\) within the time interval \((0,T)\) and let \(n_{j}\) be the corresponding discrete state in the interval \((t_{j},t_{j+1})\) with \(t_{0}=0\). Introduce the set

*π*-periodic matrix whose first column is proportional to \(\varPhi'(\omega_{0}t)\), and \(\nu_{1} = 0\). That is, \(P(\theta)^{-1}\varPhi'(\theta) =c_{0}\mathbf{e}\) with \(\mathbf{e}_{j}=\delta_{1,j}\) and \(c_{0}\) an arbitrary constant; we set \(c_{0}=1\) for convenience. To simplify the notation, we assume throughout that the Floquet multipliers are real and hence that \(P(\theta)\) is a real matrix; these results readily generalize to the case that \(\mathscr {S}\) is complex. The limit cycle is taken to be stable, meaning that there is a constant \(b > 0\) such that \(\nu_{i} \leq -b\) for all \(2\leq i \leq d\). Furthermore, \(P^{-1}(\theta)\) exists for all *θ*, since \(\varPi^{-1}(t)\) exists for all *t*.
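The Floquet quantities introduced above can be computed numerically by integrating the variational equation around one period and diagonalizing the resulting monodromy matrix. As a check, the following sketch recovers the multipliers \(\{1,e^{-4\pi}\}\) of the unit-circle limit cycle of the Stuart–Landau oscillator (an illustrative example, not taken from the review):

```python
import numpy as np

# Floquet multipliers of the Stuart-Landau limit cycle (unit circle, period
# 2*pi), obtained by integrating the variational equation dM/dt = J(Phi(t)) M
# around one period with RK4. The multipliers should be {1, exp(-4*pi)}:
# one neutral direction along the cycle and one stable transverse direction
# with Floquet exponent -2. (Illustrative system, not from the review.)

def F(z):
    x, y = z
    r2 = x * x + y * y
    return np.array([x * (1 - r2) - y, y * (1 - r2) + x])

def J(z):
    x, y = z
    return np.array([[1 - 3 * x * x - y * y, -2 * x * y - 1],
                     [-2 * x * y + 1, 1 - x * x - 3 * y * y]])

def rhs(state):
    z, M = state[:2], state[2:].reshape(2, 2)
    return np.concatenate([F(z), (J(z) @ M).ravel()])

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

T, steps = 2 * np.pi, 4000
state = np.concatenate([[1.0, 0.0], np.eye(2).ravel()])
for _ in range(steps):
    state = rk4(state, T / steps)
monodromy = state[2:].reshape(2, 2)
mults = sorted(np.abs(np.linalg.eigvals(monodromy)))
```

The unit multiplier corresponds to the neutral phase direction, while the second multiplier gives the rate of contraction onto the cycle that underlies the stability constant *b* above.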

### 6.2 Defining the Piecewise Deterministic Phase Using a Variational Principle

*τ*, there exists a unique continuous solution to this equation. Define \(\mathfrak {M}(z,\varphi) \in \mathbb {R}\) as

### 6.3 Weak Noise Limit

### 6.4 Decay of Amplitude Vector

*ε* if the dynamics remains within some attracting neighborhood of the limit cycle, that is, the amplitude remains small. Since the amplitude \(v_{t}\) satisfies \(\sqrt{ \varepsilon }v_{t}=x_{t}-\varPhi_{\beta_{t}}\), we have

*C* such that the probability that the time to leave an \(O(a)\) neighborhood of the limit cycle is less than *T* scales as \(T\exp (-{Ca}/{\varepsilon } )\). An interesting difference between this bound and the corresponding one obtained for SDEs [22] is that in the latter the bound is of the form \(T\exp (-{Cba}/{\varepsilon } )\), where *b* is the rate of decay toward the limit cycle. In other words, in the SDE case, the bound is still powerful in the large *ε* case, as long as \(b\varepsilon ^{-1} \gg1\), that is, as long as the decay toward the limit cycle dominates the noise. However, this no longer holds in the PDMP case: if *ε* is large, then the most likely way for the system to escape the limit cycle is that it stays in one particular discrete state for too long without jumping, and the time that it stays in one state is not particularly affected by *b* (in most cases).

### 6.5 Synchronization of Hybrid Oscillators

As we have outlined previously, it is possible to apply phase reduction techniques to PDMPs that support a limit cycle in the fast switching limit [21]. One of the important consequences of this reduction is that it provides a framework for studying the synchronization of a population of PDMP oscillators, either through direct coupling or via a common noise source. In the case of SDEs, there has been considerable recent interest in noise-induced phase synchronization [47, 62, 106, 135, 136, 149]. This concerns the observation that a population of oscillators can be synchronized by a randomly fluctuating external input applied globally to all of the oscillators, even if there are no interactions between the oscillators. Evidence for such an effect has been found in experimental studies of neural oscillations in the olfactory bulb [59] and the synchronization of synthetic genetic oscillators [151]. A related phenomenon is the reproducibility of a dynamical system response when repetitively driven by the same fluctuating input, even though initial conditions vary across trials. One example is the spike-time reliability of single neurons [60, 98].

*M* identical noninteracting oscillators according to the system of equations \(\dot{x}_{j}=F(x_{j})+I(t)\), where \(x_{j}\in \mathbb {R}^{d}\) is the state of the *j*th oscillator, \(j=1,\ldots,M\) [104]; see Fig. 23. Here \(I(t)\) switches between two values \(I_{0}\) and \(I_{1}\) at random times generated by a two-state Markov chain [5]. (In the case of the classical ML model, \(I(t)\) could represent a randomly switching external current.) That is, \(I(t)=I_{0}(1-N(t))+I_{1}N(t)\) for \(N(t)\in\{0,1\}\), with the time

*T* between switching events taken to be exponentially distributed with mean switching time *τ*. Suppose that each oscillator supports a stable limit cycle for each of the two input values \(I_{0}\) and \(I_{1}\). It follows that the internal state of each oscillator randomly jumps between the two limit cycles. Nagai et al. [104] show that in the slow switching limit (large *τ*), the dynamics can be described by random phase maps. Moreover, if the phase maps are monotonic, then the associated Lyapunov exponent is generally negative, and phase synchronization is stable.
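A minimal simulation illustrates noise-induced synchronization by a common dichotomous input. The sketch below drives two identical, uncoupled phase oscillators with the same switching input, using an assumed sensitivity function \(Z(\theta)=\sin\theta\) and illustrative parameters (not the model of [104]), and shows that their phase difference contracts:

```python
import math
import random

# Two identical, uncoupled phase oscillators driven by a COMMON dichotomous
# input: dtheta/dt = omega + sin(theta)*I(t), with I(t) switching between 0
# and I1 at rate 1/tau. The sensitivity Z(theta) = sin(theta) and all
# parameters are illustrative. With a common input the random phase maps
# are monotone, and the phase difference contracts.

def final_phase_gap(T=200.0, dt=1e-3, omega=1.0, I1=1.5, tau=1.0, seed=3):
    rng = random.Random(seed)
    th1, th2, n = 0.0, 2.0, 0      # distinct initial phases, shared input
    for _ in range(int(T / dt)):
        if rng.random() < dt / tau:
            n = 1 - n              # common switching environment
        I = I1 * n
        th1 += (omega + math.sin(th1) * I) * dt
        th2 += (omega + math.sin(th2) * I) * dt
    gap = (th1 - th2 + math.pi) % (2 * math.pi) - math.pi
    return abs(gap)
```

With the input switched off (`I1=0`), the gap stays at its initial value; with the common input it decays by many orders of magnitude, consistent with a negative Lyapunov exponent for monotone random phase maps.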

More generally, let \(N(t) \in\varGamma\equiv\{0,\ldots,N_{0}-1\}\) denote the state of a randomly switching environment. When the environmental state is \(N(t)=n\), each oscillator \(x_{i}(t)\) evolves according to the piecewise deterministic differential equation \(\dot{x}_{i}=F_{n}(x_{i})\) for \(i=1,\ldots,M\). The additive dichotomous noise case is recovered by taking \(N_{0}=2\) and \(F_{n}(x)=F(x)+I_{n}\). In the slow switching limit, we can generalize the approach of Nagai et al. [104] by assuming that each of the vector fields \(F_{n}(x_{i})\), \(n\in\varGamma\), supports a stable limit cycle and constructing the associated random phase maps. Here we briefly discuss the fast switching regime, assuming that in the adiabatic limit \(\varepsilon \rightarrow0\), the resulting deterministic system \(\dot {x}_{i}=\overline{F}(x_{i})\) supports a stable limit cycle. Since there is no coupling or remaining external drive to the oscillators in this limit, their phases are uncorrelated. This then raises the issue as to whether or not phase synchronization occurs when \(\varepsilon >0\).

Again, one approach would be to carry out a QSS analysis along the lines of Sect. 2.2, in which each oscillator is approximated by an SDE with a common Gaussian input. We could then adapt previous work on the phase reduction of stochastic limit cycle oscillators [62, 106, 135, 136] and thus establish that phase synchronization occurs under the diffusion approximation. However, the QSS approximation is only intended to be accurate over time-scales that are longer than \(O(\varepsilon )\). Hence, it is unclear whether or not the associated Lyapunov exponent is accurate, since it is obtained from averaging the fluctuations in the noise over infinitesimally small time-scales. Therefore, it would be interesting to derive a more accurate expression for the Lyapunov exponent by working directly with an exact implicit equation for the phase dynamics such as the population analog of equation (6.24).
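Pending such an analysis, one crude numerical check is to estimate the largest Lyapunov exponent directly along a realization of the common switching input, by integrating the variational (tangent) equation with periodic renormalization (Benettin's method); this bypasses the QSS/diffusion approximation altogether. The sketch below uses a hypothetical Stuart–Landau-type oscillator with dichotomous input and illustrative parameters; it is not the phase-based computation discussed in the text:

```python
import numpy as np

rng = np.random.default_rng(7)

def F(x, I):
    # Hypothetical Stuart-Landau-type oscillator with additive input I.
    u, v = x
    r2 = u*u + v*v
    return np.array([u - v - r2*u + I, u + v - r2*v])

def J(x):
    # Jacobian of F; the additive input I drops out.
    u, v = x
    return np.array([[1.0 - 3*u*u - v*v, -1.0 - 2*u*v],
                     [1.0 - 2*u*v, 1.0 - u*u - 3*v*v]])

def lyapunov(tau=5.0, I_vals=(0.0, 0.5), T=100.0, dt=1e-3, delta=1e-8):
    """Benettin estimate of the largest Lyapunov exponent along one
    realization of the dichotomous switching input."""
    x = np.array([1.0, 0.0])
    dx = np.array([delta, 0.0])          # tangent (variational) vector
    n, t, t_next, acc = 0, 0.0, rng.exponential(tau), 0.0
    while t < T:
        if t >= t_next:                  # common switching event
            n, t_next = 1 - n, t + rng.exponential(tau)
        x = x + dt * F(x, I_vals[n])     # PDMP trajectory
        dx = dx + dt * (J(x) @ dx)       # linearized dynamics along it
        t += dt
        norm = np.linalg.norm(dx)
        acc += np.log(norm / delta)      # accumulate growth factor
        dx *= delta / norm               # renormalize to avoid overflow
    return acc / T

lam = lyapunov()
```

A negative finite-time estimate is consistent with stochastic phase synchronization, although long runs and averaging over realizations are needed for a reliable sign.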

## 7 Conclusion

In this review we have summarized the basic theory of stochastic hybrid systems and surveyed their applications in cellular neuroscience. We end by listing a number of open problems suggested by the work reviewed here:

- 1.
Solving the stationary version of the CK equation (2.6) for higher-dimensional stochastic hybrid systems with multiple discrete states; developing an ergodic theory of PDMPs. (See also the recent paper by Bakhtin et al. [4].)

- 2.
Calculating the Perron eigenvalue (Hamiltonian) of equation (2.39) for a wider range of models; currently, only a few exact solutions are known, such as the ion channel model of Sect. 3; extending the theory of metastability to PDMPs with infinite Markov chains, where the Perron–Frobenius theorem does not necessarily hold.

- 3.
Developing more detailed biophysical models of the transfer of vesicles between motor-complexes and synaptic targets; identifying local signaling mechanisms for synaptic targeting; incorporating the contribution of intracellular stores; coupling mRNA transport to long-term synaptic plasticity.

- 4.
Solving the diffusion equation with randomly switching boundary conditions when the switching of a gate depends, for example, on the local particle concentration; solving higher-dimensional boundary value problems; analyzing higher-order moments of the stochastic concentration.

- 5.
Analyzing the synchronization of stochastic hybrid oscillators driven by a common environmental switching process; extending the theory to take into account a partial dependence of the switching process on the continuous dynamics of each oscillator.

- 6.
Modeling synaptically coupled neural networks as a stochastic hybrid system, where the individual spikes of a neural population are treated as the discrete process, and the synaptic currents driving the neurons to fire correspond to the continuous process. So far, stochastic hybrid neural networks are phenomenologically based [11, 24]. Can such networks be derived from a more fundamental microscopic theory, and is there a way of distinguishing the output activity of hybrid networks from that of networks driven, for example, by Gaussian noise?

## Declarations

### Acknowledgements

Not applicable.

### Availability of data and materials

Data sharing not applicable to this paper as no datasets were generated or analyzed during the current study.

### Funding

PCB and JNM were supported by the National Science Foundation (DMS-1613048).

### Authors’ contributions

Sections 1–5 are based on tutorial lectures PCB gave at ICNMS (2017). Section 6 is a review of recent research by PCB and JNM. Both authors read and approved the final manuscript.

### Ethics approval and consent to participate

Not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Consent for publication

Not applicable.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## References

- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P. Molecular biology of the cell. 5th ed. New York: Garland; 2008.
- Anderson DF, Ermentrout GB, Thomas PJ. Stochastic representations of ion channel kinetics and exact stochastic simulation of neuronal dynamics. J Comput Neurosci. 2015;38:67–82.
- Ashwin P, Coombes S, Nicks R. Mathematical frameworks for oscillatory network dynamics in neuroscience. J Math Neurosci. 2016;6:2.
- Bakhtin Y, Hurth T, Lawley SD, Mattingly JC. Smooth invariant densities for random switching on the torus. Preprint. arXiv:1708.01390 (2017).
- Bena I. Dichotomous Markov noise: exact results for out-of-equilibrium systems. Int J Mod Phys B. 2006;20:2825–88.
- Blum J, Reed MC. A model for slow axonal transport and its application to neurofilamentous neuropathies. Cell Motil Cytoskelet. 1989;12:53–65.
- Bonnin M. Amplitude and phase dynamics of noisy oscillators. Int J Circuit Theory Appl. 2017;45:636–59.
- Bramham CR, Wells DG. Dendritic mRNA: transport, translation and function. Nat Rev Neurosci. 2007;8:776–89.
- Bredt DS, Nicoll RA. AMPA receptor trafficking at excitatory synapses. Neuron. 2003;40:361–79.
- Bressloff PC. Stochastic processes in cell biology. Berlin: Springer; 2014.
- Bressloff PC. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks. J Math Neurosci. 2015;5:4.
- Bressloff PC. Diffusion in cells with stochastically-gated gap junctions. SIAM J Appl Math. 2016;76:1658–82.
- Bressloff PC. Aggregation-fragmentation model of vesicular transport in neurons. J Phys A. 2016;49:145601.
- Bressloff PC. Topical review: stochastic switching in biology: from genotype to phenotype. J Phys A. 2017;50:133001.
- Bressloff PC, Faugeras O. On the Hamiltonian structure of large deviations in stochastic hybrid systems. J Stat Mech. 2017;033206.
- Bressloff PC, Karamched B. Model of reversible vesicular transport with exclusion. J Phys A. 2016;49:345602.
- Bressloff PC, Lawley SD. Escape from subcellular domains with randomly switching boundaries. Multiscale Model Simul. 2015;13:1420–45.
- Bressloff PC, Lawley SD. Moment equations for a piecewise deterministic PDE. J Phys A. 2015;48:105001.
- Bressloff PC, Lawley SD. Diffusion on a tree with stochastically-gated nodes. J Phys A. 2016;49:245601.
- Bressloff PC, Levien E. Synaptic democracy and active intracellular transport in axons. Phys Rev Lett. 2015;114:168101.
- Bressloff PC, Maclaurin JN. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems. Chaos. 2018;28:063105.
- Bressloff PC, Maclaurin JN. A variational method for analyzing stochastic limit cycle oscillators. SIAM J Appl Math. In press 2018.
- Bressloff PC, Newby JM. Stochastic models of intracellular transport. Rev Mod Phys. 2013;85:135–96.
- Bressloff PC, Newby JM. Metastability in a stochastic neural network modeled as a velocity jump Markov process. SIAM J Appl Dyn Syst. 2013;12:1394–435.
- Bressloff PC, Newby JM. Path-integrals and large deviations in stochastic hybrid systems. Phys Rev E. 2014;89:042701.
- Bressloff PC, Newby JM. Stochastic hybrid model of spontaneous dendritic NMDA spikes. Phys Biol. 2014;11:016006.
- Brooks HA, Bressloff PC. Quasicycles in the stochastic hybrid Morris–Lecar neural model. Phys Rev E. 2015;92:012704.
- Brown A. Slow axonal transport: stop and go traffic in the axon. Nat Rev Mol Cell Biol. 2000;1:153–6.
- Brown A. Axonal transport of membranous and nonmembranous cargoes: a unified perspective. J Cell Biol. 2003;160:817–21.
- Buckwar E, Riedler MG. An exact stochastic hybrid model of excitable membranes including spatio-temporal evolution. J Math Biol. 2011;63:1051–93.
- Bukauskas FK, Verselis VK. Gap junction channel gating. Biochim Biophys Acta. 2004;1662:42–60.
- Chow CC, White JA. Spontaneous action potentials due to channel fluctuations. Biophys J. 1996;71:3013–21.
- Coggan JS, Bartol TM, Esquenazi E, Stiles JR, Lamont S, Martone ME, Berg DK, Ellisman MH, Sejnowski TJ. Evidence for ectopic neurotransmission at a neuronal synapse. Science. 2005;309:446–51.
- Collingridge GL, Isaac JTR, Wang YT. Receptor trafficking and synaptic plasticity. Nat Rev Neurosci. 2004;5:952–62.
- Connors BW, Long MA. Electrical synapses in the mammalian brain. Annu Rev Neurosci. 2004;27:393–418.
- Damm EM, Pelkmans L. Systems biology of virus entry in mammalian cells. Cell Microbiol. 2006;8:1219–27.
- Davis MHA. Piecewise-deterministic Markov processes: a general class of non-diffusion stochastic models. J R Stat Soc, Ser B, Methodol. 1984;46:353–88.
- de Vos KJ, Grierson AJ, Ackerley S, Miller CCJ. Role of axonal transport in neurodegenerative diseases. Annu Rev Neurosci. 2008;31:151–73.
- Dembo A, Zeitouni O. Large deviations: techniques and applications. 2nd ed. New York: Springer; 2004.
- Doi M. Second quantization representation for classical many-particle systems. J Phys A. 1976;9:1465–77.
- Doi M. Stochastic theory of diffusion controlled reactions. J Phys A. 1976;9:1479–95.
- Driver JW, Rodgers AR, Jamison DK, Das RK, Kolomeisky AB, Diehl MR. Coupling between motor proteins determines dynamic behavior of motor protein assemblies. Phys Chem Chem Phys. 2010;12:10398–405.
- Dykman MI, Mori E, Ross J, Hunt PM. Large fluctuations and optimal paths in chemical kinetics. J Chem Phys. 1994;100:5735–50.
- Dynes JL, Steward O. Dynamics of bidirectional transport of arc mRNA in neuronal dendrites. J Comp Neurol. 2007;500:433–47.
- Elgart V, Kamenev A. Rare event statistics in reaction–diffusion systems. Phys Rev E. 2004;70:041106.
- Ermentrout GB. Simplifying and reducing complex models. In: Computational modeling of genetic and biochemical networks. Cambridge: MIT Press; 2001. p. 307–23.
- Ermentrout GB. Noisy oscillators. In: Laing CR, Lord GJ, editors. Stochastic methods in neuroscience. Oxford: Oxford University Press; 2009.
- Ermentrout GB, Terman D. Mathematical foundations of neuroscience. New York: Springer; 2010.
- Escudero C, Kamenev A. Switching rates of multistep reactions. Phys Rev E. 2009;79:041149.
- Evans WJ, Martin PE. Gap junctions: structure and function. Mol Membr Biol. 2002;19:121–36.
- Faggionato A, Gabrielli D, Crivellari M. Averaging and large deviation principles for fully-coupled piecewise deterministic Markov processes and applications to molecular motors. Markov Process Relat Fields. 2010;16:497–548.
- Feng H, Han B, Wang J. Landscape and global stability of nonadiabatic and adiabatic oscillations in a gene network. Biophys J. 2012;102:1001–10.
- Feng J, Kurtz TG. Large deviations for stochastic processes. Providence: Am Math Soc; 2006.
- Fox RF, Lu YN. Emergent collective behavior in large numbers of globally coupled independent stochastic ion channels. Phys Rev E. 1994;49:3421–31.
- Freidlin MI, Wentzell AD. Random perturbations of dynamical systems. New York: Springer; 1998.
- Friedman A, Craciun G. A model of intracellular transport of particles in an axon. J Math Biol. 2005;51:217–46.
- Friedman A, Craciun G. Approximate traveling waves in linear reaction-hyperbolic equations. SIAM J Math Anal. 2006;38:741–58.
- Fuxe K, Dahlstrom AB, Jonsson G, Marcellino D, Guescini M, Dam M, Manger P, Agnati L. The discovery of central monoamine neurons gave volume transmission to the wired brain. Prog Neurobiol. 2010;90:82–100.
- Galan RF, Ermentrout GB, Urban NN. Optimal time scale for spike-time reliability: theory, simulations and experiments. J Neurophysiol. 2008;99:277–83.
- Galan RF, Fourcaud-Trocme N, Ermentrout GB, Urban NN. Correlation-induced synchronization of oscillations in olfactory bulb neurons. J Neurosci. 2006;26:3646–55.
- Gardiner CW. Handbook of stochastic methods. 4th ed. Berlin: Springer; 2009.
- Goldobin DS, Pikovsky A. Synchronization and desynchronization of self-sustained oscillators by common noise. Phys Rev E. 2005;71:045201.
- Goldobin DS, Teramae J, Nakao H, Ermentrout GB. Dynamics of limit-cycle oscillators subject to general noise. Phys Rev Lett. 2010;105:154101.
- Goldwyn JH, Shea-Brown E. The what and where of adding channel noise to the Hodgkin–Huxley equations. PLoS Comput Biol. 2011;7:e1002247.
- Gonze D, Halloy J, Gaspard P. Biochemical clocks and molecular noise: theoretical study of robustness factors. J Chem Phys. 2002;116:10997–1010.
- Goodenough DA, Paul DL. Gap junctions. Cold Spring Harb Perspect Biol. 2009;1:a002576.
- Gumy LF, Hoogenraad CC. Local mechanisms regulating selective cargo entry and long-range trafficking in axons. Curr Opin Neurobiol. 2018;51:23–8.
- Gumy LF, Katrukha EA, Grigoriev I, Jaarsma D, Kapitein LC, Akhmanova A, Hoogenraad CC. MAP2 defines a pre-axonal filtering zone to regulate KIF1- versus KIF5-dependent cargo. Neuron. 2017;94:347–62.
- Hanggi P, Grabert H, Talkner P, Thomas H. Bistable systems: master equation versus Fokker–Planck modeling. Phys Rev A. 1984;29:371–8.
- Hanggi P, Talkner P, Borkovec M. Reaction rate theory: fifty years after Kramers. Rev Mod Phys. 1990;62:251–341.
- Hausser M. Synaptic function: dendritic democracy. Curr Biol. 2001;11:R10–R12.
- Henley JM, Barker EA, Glebov OO. Routes, destinations and delays: recent advances in AMPA receptor trafficking. Trends Neurosci. 2011;34:258–68.
- Hillen T, Othmer H. The diffusion limit of transport equations derived from velocity-jump processes. SIAM J Appl Math. 2000;61:751–75.
- Hillen T, Swan A. The diffusion limit of transport equations in biology. In: Preziosi L, et al., editors. Mathematical models and methods for living systems. 2016. p. 3–129.
- Hinch R, Chapman SJ. Exponentially slow transitions on a Markov chain: the frequency of calcium sparks. Eur J Appl Math. 2005;16:427–46.
- Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.
- Jung P, Brown A. Modeling the slowing of neurofilament transport along the mouse sciatic nerve. Phys Biol. 2009;6:046002.
- Karamched B, Bressloff PC. Effects of geometry on reversible vesicular transport. J Phys A. 2017;50:055601.
- Karmakar R, Bose I. Graded and binary responses in stochastic gene expression. Phys Biol. 2004;1:197–204.
- Keener JP, Newby JM. Perturbation analysis of spontaneous action potential initiation by stochastic ion channels. Phys Rev E. 2011;84:011918.
- Keener JP, Sneyd J. Mathematical physiology I: cellular physiology. 2nd ed. New York: Springer; 2009.
- Kelleher RL, Govindarajan A, Tonegawa S. Translational regulatory mechanisms in persistent forms of synaptic plasticity. Neuron. 2004;44:59–73.
- Kepler TB, Elston TC. Stochasticity in transcriptional regulation: origins, consequences, and mathematical representations. Biophys J. 2001;81:3116–36.
- Kifer Y. Large deviations and adiabatic transitions for dynamical systems and Markov processes in fully coupled averaging. Mem Am Math Soc. 2009;201:944.
- Knessl C, Matkowsky BJ, Schuss Z, Tier C. An asymptotic theory of large deviations for Markov jump processes. SIAM J Appl Math. 1985;46:1006–28.
- Knowles RB, Sabry JH, Martone ME, Deerinck TJ, Ellisman MH, Bassell GJ, Kosik KS. Translocation of RNA granules in living neurons. J Neurosci. 1996;16:7812–20.
- Koeppl H, Hafner M, Ganguly A, Mehrotra A. Deterministic characterization of phase noise in biomolecular oscillators. Phys Biol. 2011;8:055008.
- Kosik KS, Joachim CL, Selkoe DJ. Microtubule-associated protein tau (tau) is a major antigenic component of paired helical filaments in Alzheimer disease. Proc Natl Acad Sci USA. 1986;83:4044–8.
- Labavic D, Nagel H, Janke W, Meyer-Ortmanns H. Caveats in modeling a common motif in genetic circuits. Phys Rev E. 2013;87:062706.
- Lawley SD. Boundary value problems for statistics of diffusion in a randomly switching environment: PDE and SDE perspectives. SIAM J Appl Dyn Syst. 2016;15:1410–33.
- Lawley SD, Best J, Reed MC. Neurotransmitter concentrations in the presence of neural switching in one dimension. Discrete Contin Dyn Syst, Ser B. 2016;21:2255–73.
- Lawley SD, Mattingly JC, Reed MC. Stochastic switching in infinite dimensions with applications to random parabolic PDEs. SIAM J Math Anal. 2015;47:3035–63.
- Li Y, Jung P, Brown A. Axonal transport of neurofilaments: a single population of intermittently moving polymers. J Neurosci. 2012;32:746–58.
- Lu T, Shen T, Zong C, Hasty J, Wolynes PG. Statistics of cellular signal transduction as a race to the nucleus by multiple random walkers in compartment/phosphorylation space. Proc Natl Acad Sci USA. 2006;103:16752–7.
- Maas C, Belgardt D, Lee HK, Heisler FF, Lappe-Siefke C, Magiera MM, van Dijk J, Hausrat TJ, Janke C, Kneussel M. Synaptic activation modifies microtubules underlying transport of postsynaptic cargo. Proc Natl Acad Sci USA. 2009;106:8731–6.
- Maeder CI, San-Miguel A, Wu EY, Lu H, Shen K. In vivo neuron-wide analysis of synaptic vesicle precursor trafficking. Traffic. 2014;15:273–91.
- Maier RS, Stein DL. Limiting exit location distribution in the stochastic exit problem. SIAM J Appl Math. 1997;57:752–90.
- Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268:1503–6.
- Matkowsky BJ, Schuss Z. The exit problem for randomly perturbed dynamical systems. SIAM J Appl Math. 1977;33:365–82.
- Morris C, Lecar H. Voltage oscillations in the barnacle giant muscle fiber. Biophys J. 1981;35:193–213.
- Muller MJI, Klumpp S, Lipowsky R. Tug-of-war as a cooperative mechanism for bidirectional cargo transport by molecular motors. Proc Natl Acad Sci USA. 2008;105:4609–14.
- Muller MJI, Klumpp S, Lipowsky R. Motility states of molecular motors engaged in a stochastic tug-of-war. J Stat Phys. 2008;133:1059–81.
- Naeh T, Klosek MM, Matkowsky BJ, Schuss Z. A direct approach to the exit problem. SIAM J Appl Math. 1990;50:595–627.
- Nagai K, Nakao H, Tsubo Y. Synchrony of neural oscillators induced by random telegraphic currents. Phys Rev E. 2005;71:036217.
- Nakao H. Phase reduction approach to synchronization of nonlinear oscillators. Contemp Phys. 2016;57:188–214.
- Nakao H, Arai K, Kawamura Y. Noise-induced synchronization and clustering in ensembles of uncoupled limit cycle oscillators. Phys Rev Lett. 2007;98:184101.
- Nakao H, Arai K, Nagai K, Tsubo Y, Kuramoto Y. Synchrony of limit-cycle oscillators induced by random external impulses. Phys Rev E. 2005;72:026220.
- Newby JM. Isolating intrinsic noise sources in a stochastic genetic switch. Phys Biol. 2012;9:026002.
- Newby JM. Spontaneous excitability in the Morris–Lecar model with ion channel noise. SIAM J Appl Dyn Syst. 2014;13:1756–91.
- Newby JM. Bistable switching asymptotics for the self regulating gene. J Phys A. 2015;48:185001.
- Newby JM, Bressloff PC. Directed intermittent search for a hidden target on a dendritic tree. Phys Rev E. 2009;80:021913.
- Newby JM, Bressloff PC. Quasi-steady state reduction of molecular-based models of directed intermittent search. Bull Math Biol. 2010;72:1840–66.
- Newby JM, Bressloff PC. Local synaptic signaling enhances the stochastic transport of motor-driven cargo in neurons. Phys Biol. 2010;7:036004.
- Newby JM, Bressloff PC, Keener JP. Breakdown of fast-slow analysis in an excitable system with channel noise. Phys Rev Lett. 2013;111:128101.
- Newby JM, Keener JP. An asymptotic analysis of the spatially inhomogeneous velocity-jump process. Multiscale Model Simul. 2011;9:735–65.
- Othmer HG, Hillen T. The diffusion limit of transport equations II: chemotaxis equations. SIAM J Appl Math. 2002;62:1222–50.
- Pakdaman K, Thieullen M, Wainrib G. Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab. 2010;42:761–94.
- Pakdaman K, Thieullen M, Wainrib G. Asymptotic expansion and central limit theorem for multiscale piecewise-deterministic Markov processes. Stoch Process Appl. 2012;122:2292–318.
- Papanicolaou GC. Asymptotic analysis of transport processes. Bull Am Math Soc. 1975;81:330–92.
- Paulauskas N, Pranevicius M, Pranevicius H, Bukauskas FF. A stochastic four-state model of contingent gating of gap junction channels containing two “fast” gates sensitive to transjunctional voltage. Biophys J. 2009;96:3936–48.
- Peliti L. Path integral approach to birth–death processes on a lattice. J Phys. 1985;46:1469–83.
- Pinsky MA. Lectures on random evolution. Singapore: World Scientific; 1991.
- Reed MC, Venakides S, Blum JJ. Approximate traveling waves in linear reaction-hyperbolic equations. SIAM J Appl Math. 1990;50:167–80.
- Roma DM, O’Flanagan RA, Ruckenstein AE, Sengupta AM. Optimal path to epigenetic switching. Phys Rev E. 2005;71:011902.
- Rook MS, Lu M, Kosik KS. CaMKIIalpha 3′ untranslated region-directed mRNA translocation in living neurons: visualization by GFP linkage. J Neurosci. 2000;20:6385–93.
- Rumsey CC, Abbott LF. Synaptic democracy in active dendrites. J Neurophysiol. 2006;96:2307–18.
- Saez JC, Berthoud VM, Branes MC, Martinez AD, Beyer EC. Plasma membrane channels formed by connexins: their regulation and functions. Physiol Rev. 2003;83:1359–400.
- Sasai M, Wolynes PG. Stochastic gene expression as a many-body problem. Proc Natl Acad Sci USA. 2003;100:2374–9.
- Schnitzer M, Visscher K, Block S. Force production by single kinesin motors. Nat Cell Biol. 2000;2:718–23.
- Schuss Z. Theory and applications of stochastic processes: an analytical approach. Applied mathematical sciences. vol. 170. New York: Springer; 2010.
- Smiley MW, Proulx SR. Gene expression dynamics in randomly varying environments. J Math Biol. 2010;61:231–51.
- Smith GD. Modeling the stochastic gating of ion channels. In: Fall C, Marland ES, Wagner JM, Tyson JJ, editors. Computational cell biology. Chap. 11. New York: Springer; 2002.
- Steward O, Schuman EM. Protein synthesis at synaptic sites on dendrites. Annu Rev Neurosci. 2001;24:299–325.
- Telley IA, Bieling P, Surrey T. Obstacles on the microtubule reduce the processivity of kinesin-1 in a minimal in vitro system and in cell extract. Biophys J. 2009;96:3341–53.
- Teramae JN, Nakao H, Ermentrout GB. Stochastic phase reduction for a general class of noisy limit cycle oscillators. Phys Rev Lett. 2009;102:194102.
- Teramae JN, Tanaka D. Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Phys Rev Lett. 2004;93:204103.
- Thomas PJ, Lindner B. Asymptotic phase for stochastic oscillators. Phys Rev Lett. 2014;113:254101.
- Touchette H. The large deviation approach to statistical mechanics. Phys Rep. 2009;478:1–69.
- Triller A, Choquet D. Surface trafficking of receptors between synaptic and extrasynaptic membranes: and yet they do move! Trends Neurosci. 2005;28:133–9.
- Vershinin M, Carter BC, Razafsky DS, King SJ, Gross SP. Multiple-motor based transport and its regulation by tau. Proc Natl Acad Sci USA. 2007;104:87–92.
- Visscher K, Schnitzer M, Block S. Single kinesin molecules studied with a molecular force clamp. Nature. 1999;400:184–9.
- Wang LCL, Ho D. Rapid movement of axonal neurofilaments interrupted by prolonged pauses. Nat Cell Biol. 2000;2:137–41.
- Weber MF, Frey E. Master equations and the theory of stochastic path integrals. Rep Prog Phys. 2017;80:046601.
- Welte MA. Bidirectional transport along microtubules. Curr Biol. 2004;14:525–37.
- White JA, Budde T, Kay AR. A bifurcation analysis of neuronal subthreshold oscillations. Biophys J. 1995;69:1203–17.
- White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends Neurosci. 2000;23:131–7.
- Wilson D, Moehlis J. Isostable reduction of periodic orbits. Phys Rev E. 2016;94:052213.
- Wong MY, Zhou C, Shakiryanova D, Lloyd TE, Deitcher DL, Levitan ES. Neuropeptide delivery to synapses by long-range vesicle circulation and sporadic capture. Cell. 2012;148:1029–38.
- Yoshimura K, Arai K. Phase reduction of stochastic limit cycle oscillators. Phys Rev Lett. 2008;101:154101.
- Zeiser S, Franz U, Wittich O, Liebscher V. Simulation of genetic networks modelled by piecewise deterministic Markov processes. IET Syst Biol. 2008;2:113–35.
- Zhou T, Zhang J, Yuan Z, Chen L. Synchronization of genetic oscillators. Chaos. 2008;18:037126.
- Zmurchok C, Small T, Ward M, Edelstein-Keshet L. Application of quasi-steady state methods to nonlinear models of intracellular transport by molecular motors. Bull Math Biol. 2017;79:1923–78.