 Research
 Open Access
Data-driven inference for stationary jump-diffusion processes with application to membrane voltage fluctuations in pyramidal neurons
The Journal of Mathematical Neuroscience, volume 9, Article number: 6 (2019)
Abstract
The emergent activity of biological systems can often be represented as low-dimensional, Langevin-type stochastic differential equations. In certain systems, however, large and abrupt events occur and violate the assumptions of this approach. We address this situation here by providing a novel method that reconstructs a jump-diffusion stochastic process based solely on the statistics of the original data. Our method assumes that these data are stationary, that diffusive noise is additive, and that jumps are Poisson. We use threshold-crossing of the increments to detect jumps in the time series. This is followed by an iterative scheme that compensates for the presence of diffusive fluctuations that are falsely detected as jumps. Our approach is based on probabilistic calculations associated with these fluctuations and on the use of the Fokker–Planck and the differential Chapman–Kolmogorov equations. After some validation cases, we apply this method to recordings of membrane noise in pyramidal neurons of the electrosensory lateral line lobe of weakly electric fish. These recordings display large, jump-like depolarization events that occur at random times, the biophysics of which is unknown. We find that some pyramidal cells increase their jump rate and noise intensity as the membrane potential approaches spike threshold, while their drift function and jump amplitude distribution remain unchanged. As our method is fully data-driven, it provides a valuable means to further investigate the functional role of these jump-like events without relying on unconstrained biophysical models.
Introduction
Complex systems are ubiquitous in many areas of science, including biology, neuroscience, climatology, engineering, as well as in finance and social sciences [1, 2]. The common feature uniting such vastly different systems is the nonlinear interaction between their numerous microscopic constituents. This collective activity leads to the emergence of macroscopic order that cannot be reduced to microscopic properties [3]. It is often these macroscopic variables that can be measured experimentally, allowing the emergent dynamics of the system to be captured by low-dimensional data sets. The search for a macroscopic-level representation of the system thus relies on extracting dynamical information from observed time series [4].
For some systems, the high-dimensional, microscopic degrees of freedom can be well approximated by simple stochastic fluctuations. These fluctuations often participate as dynamical noise in the macroscopic evolution of the system. The low-dimensional representation of the system’s dynamics can then be expressed as a stochastic dynamical system [5]. More specifically, the observed data are usually assumed to satisfy a Langevin-type stochastic differential equation (SDE). A common approach is to obtain the drift and diffusion functions of this equation by estimating the first and second Kramers–Moyal coefficients [6, 7]. The resulting model is completely data-driven and captures the core phenomenology of the original data without relying on knowledge or assumptions about the microscopic constituents of the observed system. This approach has been successfully applied in a variety of contexts, including neuronal dynamics [8,9,10,11,12], heart rate variability [13, 14], turbulence [15, 16], and the calibration of optical tweezers [17] (see Ref. [5] for a review).
In all mentioned cases, however, the noise is assumed to be purely diffusive, i.e., random fluctuations with continuous sample paths. This description is incomplete if, in addition to diffusive fluctuations, large and abrupt events appear at random times throughout the time series. In this case, jump-diffusion stochastic processes provide a more appropriate framework to model these data. Jump-diffusion processes have been used in neuroscience as a model for the spatial [18,19,20,21] and temporal [22] organization of synaptic bombardment, in physics as a model for noise-driven transport in ratchet potentials [23,24,25], as well as in finance [26, 27] and soil moisture dynamics [28]. The Langevin approach is likely to fail if the observed system exhibits jump-diffusion characteristics, such as skewed distributions and sudden large jumps. In such cases, and especially when the microscopic dynamics are unknown, extracting a phenomenological model from the experimental data would provide a valuable tool to probe the dynamics of the observed system, its interaction with other systems, and the interplay between diffusive and jump noise sources in shaping the observed behavior.
In this paper, we present a novel, data-driven inference method that fits a jump-diffusion SDE to experimental time series. By detecting jumps through threshold-crossing and by calculating the contribution of diffusive fluctuations that are falsely detected as jumps, we iteratively estimate the drift function, noise intensity, jump rate, and jump amplitude distribution. The result of this semiparametric method is a jump-diffusion SDE that successfully fits the original data. Our method is applicable in cases where these data are stationary, with additive diffusive noise and Poisson jumps. Note that other studies have attempted to infer jump-diffusion dynamics from data, but they rely on assuming a parametric form of the jump amplitude distribution [29,30,31], or consider only Lévy processes [32, 33].
We test our method with two validation cases, where realizations of known jump-diffusion processes with different characteristics are used as validation data. In both cases, by using only the simulated time series, we precisely recover the correct parameters and functions used in the original simulations. We also compare the autocorrelation functions (ACF) of the validation data and of the fitted SDE. We then apply our method to recordings of intrinsic membrane voltage fluctuations in pyramidal neurons of electric fish. These recordings contain sudden and unpredictable jump-like events that occur among more typical diffusive membrane noise. Although the exact biophysical origin of these fluctuations is unknown, we find that the recordings are well fitted by jump-diffusion processes. We evaluate goodness of fit quantitatively by comparing the observed and the estimated probability density functions (PDF) and power spectral densities (PSD). Interestingly, we find that some pyramidal cells increase their jump rate and noise intensity as the membrane potential approaches spike threshold, while their drift function and jump amplitude distribution remain unchanged.
In Sect. 2, we present the various steps involved in the inference procedure, including the detection scheme, the calculations relating to false positives, and the iterative approach. In Sect. 3, we validate this procedure against one pure-diffusion and two jump-diffusion test cases, and then we apply it to neurophysiological recordings of membrane noise. This is followed by a discussion on the possible generalizations of our method and on future work with the experimental data (Sect. 4).
Methods
Definitions and overview
Let \(\{X(t)\}\) represent data in the form of a stationary time series. In what follows, \(\{X(t)\}\) is obtained either from experimental observations or from numerical simulation. In either case, the situation of interest here is when \(\{X(t)\}\) exhibits both diffusive fluctuations and abrupt events (henceforth called jumps). From \(\{X(t)\}\), our goal is then to fit a jump-diffusion SDE of the form

$$\mathrm{d}X(t) = F\bigl(X(t)\bigr)\,\mathrm{d}t + \sqrt{2D}\,\mathrm{d}W(t) + \mathrm{d}J(t), \tag{1}$$
where F is the drift function, D is the noise intensity, and \(W(t)\) is a Wiener process (i.e., Brownian motion). Here \(J(t)\) is a compound Poisson process representing the jumps

$$J(t) = \sum_{i=1}^{N_{\lambda }(t)} B_{i}, \tag{2}$$
where \(N_{\lambda }(t)\) is a Poisson point process with rate λ, and the \({B_{i}}\)’s are the independent and identically distributed jump amplitudes drawn from a distribution \(Q_{B}\). For a small enough sampling interval, or time step Δt, jumps will occur with probability \(\varGamma _{B} \equiv \lambda \Delta t\).
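As a concrete illustration, a process of the form of Eqs. (1)–(2) can be simulated with an Euler–Maruyama scheme. The sketch below is not taken from the paper: the Ornstein–Uhlenbeck drift, the exponential jump amplitudes, and the helper name `simulate_jump_diffusion` are illustrative choices.

```python
import numpy as np

def simulate_jump_diffusion(F, D, lam, sample_B, dt, n_steps, x0=0.0, seed=None):
    """Euler-Maruyama integration of dX = F(X) dt + sqrt(2D) dW + dJ,
    with J a compound Poisson process of rate lam and i.i.d. jump
    amplitudes drawn by sample_B(rng)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        # at most one jump per step, a valid approximation when lam * dt << 1
        jump = sample_B(rng) if rng.random() < lam * dt else 0.0
        x[i + 1] = (x[i] + F(x[i]) * dt
                    + np.sqrt(2.0 * D * dt) * rng.standard_normal()
                    + jump)
    return x

# Illustrative parameter choices (not from the paper)
path = simulate_jump_diffusion(F=lambda x: -x, D=0.01, lam=2.0,
                               sample_B=lambda rng: rng.exponential(0.5),
                               dt=1e-4, n_steps=50_000, seed=0)
```

Note that the single-jump-per-step shortcut mirrors the assumption \(\varGamma _{B} = \lambda \Delta t \ll 1\) made below.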
Although the method developed herein is applicable to a wide array of experimental data types, we impose certain conditions on the underlying dynamical process. Notably, we limit our analysis to systems where the dynamics are stationary (in the strict sense), the diffusion noise is additive, the jumps have positive amplitudes (\(B_{i}>0\)), and where the Poisson rate λ is constant in time and small enough so that \(\varGamma _{B} \ll 1\). Furthermore, we assume here that F is continuous and that a single stable fixed point arises from the deterministic part of the dynamics, but our method could be generalized to multistable systems. Except for this restriction, we assume no particular shape for the drift function as long as it generates a stationary process. Finally, we assume that, on average, jump amplitudes are greater than the typical magnitude of diffusive increments: \(\mathrm{E}[B_{i}] > (2D\Delta t)^{\frac{1}{2}}\) by one order of magnitude or more.
In terms of the time series itself, we assume that the data were sampled at a high frequency, such that Δt can be assumed to be small with respect to the total duration of the time series. Note that the value of Δt is set from the experiments that produced the data, and so is not a variable we can control. For experimental data, however, it is possible that the Markov property breaks down on the timescale of individual observations [5], but we assume that, in that case, the time series can be downsampled to a timescale where the Markov property then holds.
Furthermore, in the cases considered below, jumps appear unambiguously in the data, and so a jump-diffusion approach is warranted. In situations where this is not the case, the presence of jumps can be assessed from the fourth-order Kramers–Moyal coefficient, which is nonvanishing for processes with discontinuities [31, 34]. Lastly, note that we make no assumptions regarding the structure of the ACF of \(\{X(t)\}\); we only assume stationarity in these data. In fact, we compare the ACFs (and PSDs) of time series generated by the fitted SDE with those of the original data as a means of validating the inference procedure.
Below we develop a data-driven inference procedure that successfully generates the estimates D̂, λ̂, F̂, and \(\hat{Q}_{B}\), where the unknown functions F and \(Q_{B}\) are estimated nonparametrically. This inference procedure results in a fitted stochastic process \(Y(t)\) that is an adequate model of the original data. In this sense, we implicitly assume that the data \(\{X(t)\}\) are sampled from a realization of \(Y(t)\). In our calculations, we thus associate the first-order equilibrium PDF of \(Y(t)\), \(P_{Y}\), with the empirical PDF of \(\{X(t)\}\), \(P_{X}\) (obtained by kernel density estimation).
Our approach is predicated on the detection of jumps in the data via the application of a threshold \(\theta ^{*}>0\) on the increments \(\Delta X(t+\Delta t) \equiv X(t + \Delta t) - X(t)\). This procedure creates a pool of detected jumps with various amplitudes. Let \(Q_{C}\) be the empirical PDF estimated from these measured jump amplitudes. Also, let n be the total number of increments in the time series, and m be the number of increments whose value is greater than \(\theta ^{*}\). We define the (overall) jump detection probability as \(\varGamma _{C} \equiv \operatorname{Prob} \{ \text{detecting an increment larger than }\theta ^{*}\text{ across an interval }\Delta t \}\), which we estimate from the data as \(\varGamma _{C} = \frac{m}{n}\).
An inherent challenge with this threshold-crossing approach is that, in addition to the true jumps generated by the compound Poisson process, we also unavoidably detect large diffusive fluctuations that are falsely identified as jumps, henceforth called false positives (FP; Fig. 1, top). A direct estimation of the true jump rate λ and of the true jump amplitude distribution \(Q_{B}\) is thus impossible because the detected jump pool consists of a mixture of true jumps and false positives. Our main contribution, and the central component of our inference procedure, is the calculation of FP-related statistics, namely the FP detection probability \(\varGamma _{A}\) ≡ Prob{detecting a diffusive increment larger than \(\theta ^{*}\) across a given time step Δt} and the distribution from which FP amplitudes are drawn, \(Q_{A}\). Once \(\varGamma _{A}\) and \(Q_{A}\) are calculated, we then extract λ (or, equivalently, \(\varGamma _{B}\)) and \(Q_{B}\) from \(\varGamma _{C}\) and \(Q_{C}\). Note that the subscripts “A”, “B”, and “C” will hereafter refer to FPs, true jumps, and both combined, respectively. More precisely, we measure quantities in “C” from the detected jump pool, we calculate FP statistics in “A”, and we seek the true jump statistics in “B”.
Choice of threshold
To ensure that a minimal number of true jumps are missed during the jump detection procedure, the threshold \(\theta ^{*}\) should ideally be set as low as possible. If set too low, however, the number of FPs becomes so large that the statistics of true jumps, i.e., λ and \(Q_{B}\), cannot be extracted from the statistical fluctuations of \(Q_{C}\) and \(\varGamma _{C}\). We thus aim for an intermediate value of \(\theta ^{*}\) that captures most true jumps while allowing a manageable number of FPs. This is done by relying on our assumption of positive jump amplitudes and, more precisely, by exploiting the asymmetry between positive and negative increment statistics. Note that the threshold depends implicitly on the time step of the original time series: a smaller Δt means that diffusive fluctuations are smaller, and thus a smaller value of \(\theta ^{*}\) can be used.
Let \(\{\Delta X_{+}\}\) and \(\{\Delta X_{-}\}\) be the sets of positive and negative increments of \(\{X(t)\}\), respectively (Fig. 2(A)). In addition, let \(M_{+}(\theta ) = \{\Delta X_{+}: \Delta X_{+} > \theta \}\) and \(M_{-}(\theta ) = \{ \vert \Delta X_{-} \vert : \vert \Delta X_{-} \vert > \theta \}\) be the reduced sets truncated by θ, where θ spans the common range of \(\{\Delta X_{+} \}\) and \(\{ \vert \Delta X_{-} \vert \}\). We use the difference between the sample means of these sets, \(\overline{M_{+}} - \overline{M_{-}}\), as a function of θ to quantify the relative importance of true jumps with respect to diffusive fluctuations, over different increment sizes. The value of θ for which \(\overline{M_{+}} - \overline{M_{-}}\) is a maximum corresponds to the greatest separability between positive and negative increments. We find, however, that using the inflection point, i.e., where the second derivative becomes 0, located to the left of this maximum (Fig. 2(B), asterisks) is a better choice of threshold. This slightly lower value retains a greater range of true jumps, which is desirable, while avoiding the inclusion of an overwhelmingly large number of FPs. Choosing this inflection point rather than the maximum impacts primarily the estimation of λ, since it relies on the proper detection of true jumps in the time series. A slightly higher value of \(\theta ^{*}\) will introduce a bias in λ̂. For instance, in the first jump-diffusion validation case that we consider in Sect. 3.2, choosing the maximum as the threshold yields roughly a 1% increase in the error on λ̂ compared to when the inflection point is used. Our approach for setting \(\theta ^{*}\) is thus motivated by the fact that it is advantageous to choose a value as small as possible for \(\theta ^{*}\), and the inflection point in the curve of \(\overline{M_{+}} - \overline{M_{-}}\) provides a reliable way to achieve this.
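A minimal numerical sketch of this threshold-selection heuristic, assuming the negative increments enter through their magnitudes and that the θ grid stays within the range where both truncated sets remain non-empty; the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def threshold_curve(x, thetas):
    """Separability curve mean(M+) - mean(M-) over candidate thresholds.
    Assumes every theta leaves both truncated sets non-empty."""
    dx = np.diff(x)
    pos = dx[dx > 0]
    neg = -dx[dx < 0]          # magnitudes of the negative increments
    return np.array([pos[pos > th].mean() - neg[neg > th].mean()
                     for th in thetas])

def pick_threshold(thetas, curve):
    """Inflection point (second derivative near zero) located to the
    left of the curve's maximum; falls back to the maximum itself."""
    d2 = np.gradient(np.gradient(curve, thetas), thetas)
    i_max = np.argmax(curve)
    flips = np.where(np.diff(np.sign(d2[:i_max])) != 0)[0]
    return thetas[flips[-1]] if len(flips) else thetas[i_max]
```

On real data the finite-sample curve is noisy, so the second derivative would typically be smoothed before locating the sign change.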
Jump detection
Here we describe how the threshold is applied to the increments in order to generate the detected jump pool. We apply a detection scheme tailored specifically to handle two aspects that we observe in experimental data with jumps, and it is inspired by the method used in [35]. Firstly, if the data are resolved on a fine enough time scale, jumps may last longer than a single sampling interval. Secondly, jumps need not be followed immediately by negative increments. In data, and in some simulations as well, the diffusive increments following a jump can still be positive, but we seek a method for identifying when the jump actually ends (Fig. 1, inset). These two considerations shape the method used to calculate the FP amplitude distribution in Sect. 2.4.2.
When a given increment is larger than the threshold, a jump onset time \(T_{\mathrm{on}}\) is registered, and if the next increment is below threshold, the associated offset time \(T_{\mathrm{off}}\) is registered (even if this increment is positive). This defines a jump of duration \(T_{\mathrm{off}}-T_{\mathrm{on}} = \Delta t\), where Δt is the sampling interval of the data. We henceforth refer to this type of jump as a singlet, as it spans the duration of a single time step. In contrast, if two or more successive increments are above threshold, then the jump is of duration \(2\Delta t\) or more, which we refer to as a doublet, triplet, and so forth. In other words, the jump offset time is only registered at the end of the sequence of above-threshold increments. With the onset and offset times identified, a jump amplitude is defined as the difference between \(X(T_{\mathrm{off}})\) and \(X(T_{\mathrm{on}})\).
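The onset/offset bookkeeping above amounts to a single scan over the increments. A sketch, with `detect_jumps` a hypothetical helper returning (onset index, duration, amplitude) triples:

```python
import numpy as np

def detect_jumps(x, theta, dt):
    """Group runs of consecutive above-threshold increments into single
    jumps (singlet, doublet, ...). Amplitude is X(T_off) - X(T_on)."""
    above = np.diff(x) > theta
    jumps, i, n = [], 0, len(above)
    while i < n:
        if above[i]:
            j = i
            while j < n and above[j]:
                j += 1
            # increments i..j-1 are above threshold: duration (j - i) steps
            jumps.append((i, (j - i) * dt, x[j] - x[i]))
            i = j
        else:
            i += 1
    return jumps
```

For instance, two consecutive above-threshold increments yield one doublet whose amplitude is the sum of both increments, matching the offset rule described above.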
FP and true jump statistics
Here we present the calculations of the FP detection probability \(\varGamma _{A}\) and of the FP amplitude distribution \(Q_{A}\). We then show how these quantities are used to extract the true jump rate λ and the true jump amplitude distribution \(Q_{B}\) from the detected jump probability \(\varGamma _{C}\) and the detected jump amplitude distribution \(Q_{C}\). For the calculations in this subsection, we assume that the drift function F and noise intensity D are known. In the next subsection, we show how these calculations can be incorporated into an iterative scheme that allows the simultaneous estimation of all unknowns, including F and D. Furthermore, note that the FP-related calculations involve only the diffusive part of Eq. (1) since, by definition, FPs occur during the purely diffusive segments between true jumps. The calculations in Sects. 2.4.1 and 2.4.2 thus pertain only to diffusive increments \(\Delta Y^{\mathrm{diff}}(t)\). Finally, note that we perform validation tests of the calculations of \(\varGamma _{A}\) and \(Q_{A}\) in Sect. 3.1.
FP detection probability
As mentioned in Sect. 2.1, sampling a jump-diffusion process such as Eq. (1) at finite intervals and applying a threshold on the observed increments leads to the detection of FPs, i.e., diffusive (rather than true jump) increments larger than the threshold. Importantly, these FPs occur with a probability that depends on the value of the process at the start of the interval. Let this conditional detection probability be defined as

$$\alpha (y) \equiv \operatorname{Prob} \bigl\{ \Delta Y^{\mathrm{diff}}(t+\Delta t) > \theta ^{*} \bigm| Y(t) = y \bigr\} . \tag{3}$$
As α does not depend explicitly on time, this definition relies on our assumption that \(Y(t)\) is stationary. The y-dependence arises from the drift function F. Indeed, if the drift function is positive (respectively, negative) at a given value, it biases diffusive fluctuations toward (respectively, away from) the threshold. This translates into an FP detection probability that assumes higher values when \(F(y)>0\) than when \(F(y)<0\). We now turn to the explicit calculation of \(\alpha (y)\).
Let \(\varXi _{\Delta Y\mid Y}(\xi \mid Y(t)=y)\) denote the PDF of \(\Delta Y^{\mathrm{diff}}(t+\Delta t)\) conditioned on the value of the process at the start of the interval, and where ξ assumes the possible values of the increments. Note that, because the time step remains constant, it is always implied that the increments are defined across an interval Δt. Given that we assume Δt to be sufficiently small, we approximate \(\varXi _{\Delta Y\mid Y}\) as the short-time propagator of the Fokker–Planck equation [36, 37] associated with the diffusive part of the jump-diffusion process (recall that what concerns us here are the purely diffusive segments between the true jumps of \(Y(t)\)). We thus have \(\Delta Y^{\mathrm{diff}}(t+\Delta t) \approx {\mathcal{N}} (F(Y(t))\Delta t,2D\Delta t)\) and

$$\varXi _{\Delta Y\mid Y}(\xi \mid y) \approx \frac{1}{\sqrt{4\pi D\Delta t}} \exp \biggl[ -\frac{(\xi - F(y)\Delta t)^{2}}{4D\Delta t} \biggr] , \tag{4}$$
that is, a Gaussian distribution with mean \(F(y)\Delta t\) and variance \(2D\Delta t\).
For the test cases presented in Sect. 3 (with \(\Delta t = 10^{-4}\text{ s}\)), we have validated this approximation by comparing it with numerical solutions of the associated Fokker–Planck equation solved at a finer temporal resolution (\(\Delta t/1000\)) across the time step Δt. The numerical solutions were indeed well fitted with the approximation in Eq. (4) (not shown). Numerical integration was performed with a custom partial differential equation solver that implements a finite volume discretization along with the fully implicit Euler scheme. The advective term was treated with the upwind scheme, and a linear interpolation profile for the spatial derivative was applied to the diffusive term. The resulting algebraic equation was solved with the tridiagonal matrix algorithm [38].
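The tridiagonal matrix (Thomas) algorithm mentioned above can be stated generically; this is a textbook sketch, not the authors' solver, and the array conventions (sub-diagonal `a` with `a[0]` unused, diagonal `b`, super-diagonal `c` with its last entry unused) are choices of this sketch.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d in O(n): forward elimination
    of the sub-diagonal, then back substitution."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The implicit Euler step of a one-dimensional finite-volume discretization produces exactly such a tridiagonal system at every time step.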
Once the conditional PDF of the increments is evaluated with Eq. (4), we calculate the conditional FP detection probability, given that the process starts at y, as follows:

$$\alpha (y) = \int _{\theta ^{*}}^{\infty } \varXi _{\Delta Y\mid Y}(\xi \mid y) \,\mathrm{d}\xi , \tag{5}$$
that is, the probability of observing an increment larger than \(\theta ^{*}\) starting at y. Finally, the unconditional FP detection probability is calculated based on the empirical PDF of \(\{X(t)\}\), \(P_{X}\):

$$\varGamma _{A} = \int _{-\infty }^{\infty } \alpha (y) P_{X}(y) \,\mathrm{d}y . \tag{6}$$
We validate these calculations in Sect. 3.1.
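Under the Gaussian short-time propagator, the conditional FP probability reduces to a Gaussian tail integral, and the unconditional one to an average over the stationary distribution. A sketch, in which the integral against \(P_{X}\) is replaced by a Monte Carlo average over the data samples (an assumption of this sketch, not the paper's kernel-density integral):

```python
import numpy as np
from math import erf, sqrt

def alpha(y, F, D, dt, theta):
    """Conditional FP probability: tail mass of N(F(y)*dt, 2*D*dt)
    above the threshold theta."""
    z = (theta - F(y) * dt) / sqrt(2.0 * D * dt)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def gamma_A(samples, F, D, dt, theta):
    """Unconditional FP probability: alpha averaged over the empirical
    distribution of the data."""
    return float(np.mean([alpha(y, F, D, dt, theta) for y in samples]))
```

With zero drift the tail integral is the standard normal survival function evaluated at \(\theta ^{*}/\sqrt{2D\Delta t}\), a convenient sanity check.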
FP amplitude distribution
We now proceed with the calculation of \(Q_{A}\), i.e., the distribution from which FP amplitudes are drawn. First, recall that our detection scheme allows for jumps of different durations (Sect. 2.3). As such, the detection of an FP implies either a succession of above-threshold increments (e.g., Fig. 3(A)) or, at least, a single above-threshold increment. Let \(T^{\mathrm{FP}}_{\mathrm{on}}\) denote the FP onset time, that is, the time at the start of the first above-threshold increment, and let \(Y_{0} \equiv Y(T^{\mathrm{FP}}_{\mathrm{on}})\) be referred to as the starting value of the FP. Moreover, let τ denote the FP duration, an integer multiple of Δt, such that \(\tau = \Delta t\) corresponds to an FP singlet, \(\tau = 2\Delta t\) to an FP doublet, and so forth.
In order to calculate \(Q_{A}\), let us first identify the factors that influence FP amplitudes. Firstly, it must be noted that the FP amplitudes will exhibit a y-dependence similar to that discussed in the preceding section. Indeed, an increment starting at \(Y_{0}\equiv Y(T^{\mathrm{FP}}_{\mathrm{on}})=y\) will tend to be larger when \(F(y)>0\) than when \(F(y)<0\). Secondly, the amplitude of an FP will also depend on its duration, τ. For instance, the three increments of a triplet FP will summate and tend to have a larger amplitude than that of a singlet. The FP amplitudes A will thus covary with \(Y_{0}\) and with τ, but note that τ also depends on \(Y_{0}\). Indeed, longer FPs will tend to occur where the drift function is more positive, and vice versa. To account for these dependencies, let us define the trivariate random variable \(\{A, \tau , Y_{0} \}\), distributed according to its joint PDF \(P_{A,\tau ,Y_{0}}(a,i\Delta t,y)\), where we explicitly write τ as an integer multiple of Δt. What we seek then is the marginal:

$$Q_{A}(a) = \sum_{i=1}^{\infty } \int P_{A,\tau ,Y_{0}}(a, i\Delta t, y) \,\mathrm{d}y , \tag{7}$$
where the sum extends over all possible FP durations, and where \(a>0\) represents all possible amplitudes. From the definition of conditional PDFs, we can expand the joint PDF as follows:

$$P_{A,\tau ,Y_{0}}(a, i\Delta t, y) = P_{A\mid \tau ,Y_{0}}(a\mid i\Delta t, y)\, P_{\tau \mid Y_{0}}(i\Delta t\mid y)\, P_{Y_{0}}(y), \tag{8}$$
where \(P_{\tau \mid Y_{0}}(i\Delta t\mid y)=\operatorname{Prob} (\text{detecting an FP of duration } i\Delta t\text{, given the starting value } y )\) is the conditional probability mass function of the FP duration τ. We can thus write Eq. (7) as

$$Q_{A}(a) = \int \Biggl( \sum_{i=1}^{\infty } P_{A\mid \tau ,Y_{0}}(a\mid i\Delta t, y)\, P_{\tau \mid Y_{0}}(i\Delta t\mid y) \Biggr) P_{Y_{0}}(y) \,\mathrm{d}y . \tag{9}$$
The sum in the large parentheses is a function of y, and Eq. (9) is merely calculating its average with respect to the starting value \(Y_{0}\). This sum can further be interpreted as a socalled mixture distribution: consider a collection of random variables, one of which is chosen according to a certain probability (its mixture weight) and is then realized according to its own PDF (its mixture component). The outcome of this experiment is itself a random variable whose PDF is called a mixture distribution and is expressed as a sum over the PDFs of the random variables in the collection, weighted by their respective probabilities. In our case, for a fixed value of \(Y_{0}\), an FP duration is drawn according to a countable set of mixture weights \(P_{\tau Y_{0}}\), and an FP amplitude is then realized according to the associated mixture component \(P_{A\tau ,Y_{0}}\). In practice, the sum will be truncated after the first few terms because the subsequent mixture weights become negligible.
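The mixture-sampling experiment described above can be written generically; the two Gaussian components below are placeholders for illustration, not the FP distributions \(P_{A\mid \tau ,Y_{0}}\) themselves.

```python
import numpy as np

def sample_mixture(weights, samplers, rng):
    """One draw from a mixture distribution: pick a component according
    to its mixture weight, then sample from that component's own PDF."""
    k = rng.choice(len(samplers), p=weights)
    return samplers[k](rng)

rng = np.random.default_rng(0)
draws = [sample_mixture([0.3, 0.7],
                        [lambda r: r.normal(0.0, 0.1),
                         lambda r: r.normal(5.0, 0.1)], rng)
         for _ in range(10_000)]
```

The mean of the draws converges to the weighted mean of the component means, just as \(Q_{A}\) inherits its moments from the duration-conditioned components weighted by \(P_{\tau \mid Y_{0}}\).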
From Eq. (9), we see that in order to arrive at the desired \(Q_{A}\), the functions \(P_{A\mid \tau ,Y_{0}}\), \(P_{\tau \mid Y_{0}}\), and \(P_{Y_{0}}\) must first be calculated. Let us first consider the latter. Because \(Y_{0}\) represents, by definition, the value of \(Y(t)\) at the start of an above-threshold increment, we can express its PDF in terms of the joint PDF of \(\Delta Y^{\mathrm{diff}}(t+\Delta t)\) and \(Y(t)\):

$$P_{Y_{0}}(y) = K \int _{\theta ^{*}}^{\infty } P_{\Delta Y, Y}(\xi , y) \,\mathrm{d}\xi = K P_{Y}(y) \int _{\theta ^{*}}^{\infty } \varXi _{\Delta Y\mid Y}(\xi \mid y) \,\mathrm{d}\xi = K \alpha (y) P_{X}(y), \tag{10}$$
where K is a normalization constant and where, in the last line, we have replaced \(P_{Y}\) by the empirical PDF of \(\{X(t)\}\). Note that we integrate with \(\theta ^{*}\) as a lower bound in order to enforce that \(Y_{0}\) is associated with the onset of an above-threshold increment. Let us now consider the calculation of \(P_{A\mid \tau ,Y_{0}}\).
In what follows, we simplify the notation by labeling time with the index i, such that \(i=0\) represents the time \(T^{\mathrm{FP}}_{\mathrm{on}}\), \(i=1\) the time \(T^{\mathrm{FP}}_{\mathrm{on}}+\Delta t\), \(i=2\) the time \(T^{\mathrm{FP}}_{\mathrm{on}} + 2\Delta t\), and so forth. With this notation, \(Y_{i} \equiv Y(T^{\mathrm{FP}}_{\mathrm{on}} + i\Delta t)\) denotes the ith point following the FP onset time, and \(\Delta Y_{i}^{\mathrm{diff}} \equiv Y_{i} - Y_{i-1}\) the ith diffusive increment. Furthermore, our focus here is on FPs of duration, say \(i\Delta t\), which corresponds to a sequence of i successive above-threshold increments. As such, the forthcoming calculations involve PDFs that are implicitly conditioned on the event \(\{\Delta Y^{\mathrm{diff}}_{n} > \theta ^{*}, \forall n\le i\}\).
For an FP of duration \(\tau = i\Delta t\) starting at \(Y_{0}\), we define its amplitude as \(A = Y_{i} - Y_{0}\), and we seek the conditional PDF \(P_{A\mid \tau ,Y_{0}}\). For this purpose, let \(\rho _{i}(y)\equiv P_{Y_{i}\mid Y_{0}}(y\mid y_{0})\) denote the PDF of \(Y_{i}\), \(i\geq 1\), conditioned on \(Y_{0}\). Since A is expressed as the difference between \(Y_{i}\) and \(Y_{0}\), we can directly write

$$P_{A\mid \tau ,Y_{0}}(a \mid i\Delta t, y_{0}) = \rho _{i}(a + y_{0}). \tag{11}$$
The \(\rho _{i}\)’s, for \(i>1\), are evaluated sequentially based on the fact that \(Y_{i}=\Delta Y^{\mathrm{diff}}_{i} + Y_{i-1}\). The PDF of this sum, conditioned on \(Y_{0}\), gives

$$\rho _{i}(y) = \int P_{\Delta Y_{i}\mid Y_{i-1}}(y - y' \mid y')\, \rho _{i-1}(y') \,\mathrm{d}y' . \tag{12}$$
To enforce the condition of above-threshold increments, \(\{ \Delta Y^{\mathrm{diff}}_{n} > \theta ^{*}, \forall n\le i\}\), we evaluate \(P_{\Delta Y_{i}\mid Y_{i-1}}\) based on Eq. (4), but we truncate the distribution below \(\xi =\theta ^{*}\):

$$P_{\Delta Y_{i}\mid Y_{i-1}}(\xi \mid y) = \begin{cases} K\, \varXi _{\Delta Y\mid Y}(\xi \mid y), & \xi > \theta ^{*}, \\ 0, & \xi \le \theta ^{*}, \end{cases} \tag{13}$$
where K is a normalization constant. With Eqs. (12) and (13), we finalize the calculation of \(P_{A\mid \tau ,Y_{0}}\) in Eq. (11). In Fig. 3(B), we see the representation of the \(\rho _{i}\)’s for an FP triplet. From \(\rho _{i}\), we can also calculate the PDF of \(\Delta Y^{\mathrm{diff}}_{i}\), conditioned on \(Y_{0}\) (this will be useful in the calculation of \(P_{\tau \mid Y_{0}}\)). Let this PDF be defined as \(\varXi _{i}(\xi ) \equiv P_{\Delta Y_{i}\mid Y_{0}}(\xi \mid y_{0})\). We calculate it as a marginal over \(Y_{i-1}\):

$$\varXi _{i}(\xi ) = \int P_{\Delta Y_{i}\mid Y_{i-1}}(\xi \mid y)\, \rho _{i-1}(y) \,\mathrm{d}y , \tag{14}$$
where the dependence of \(P_{\Delta Y_{i}\mid Y_{i-1}}\) on \(Y_{0}\) disappears because of the Markov property. In Fig. 3(C), we see the representation of the \(\varXi _{i}\)’s for an FP triplet.
We now turn to the calculation of \(P_{\tau Y_{0}}\), the probability of observing an FP of duration τ, conditioned on the starting value \(Y_{0}\). We are interested in the conditional probability of the event \(\{\tau = i\Delta t\}\), where i is an integer. This event is equivalent to the intersection of the events \(E_{1} \equiv \{\Delta Y _{1}^{\mathrm{diff}} > \theta ^{*}\}, E_{2} \equiv \{\Delta Y_{2}^{\mathrm{diff}} > \theta ^{*}\},\ldots\) and \(E_{i+1}' \equiv \{\Delta Y_{i+1}^{\mathrm{diff}}\le \theta ^{*}\}\). In other words, we obtain an FP of duration \(i \Delta t\) when the first i increments are above threshold, but the \((i+1)\)th increment is below threshold.
By successively applying the definition of conditional probability, we can expand \(P_{\tau \mid Y_{0}}\) as follows:

$$\operatorname{Prob} [\tau = i\Delta t \mid y_{0}] = \operatorname{Prob} \Biggl[ E'_{i+1} \Bigm| \bigcap_{n=1}^{i} E_{n}, y_{0} \Biggr] \operatorname{Prob} \Biggl[ E_{i} \Bigm| \bigcap_{n=1}^{i-1} E_{n}, y_{0} \Biggr] \cdots \operatorname{Prob} [ E_{1} \mid y_{0} ] . \tag{15}$$
Let \(Z_{i}(y_{0})\equiv \operatorname{Prob} [E_{i} \mid ( \bigcap_{n=1}^{i-1} E_{n} ), y_{0} ]\), \(i>1\), represent the probability that the ith increment is above threshold, given that the \(i-1\) previous increments were also above threshold, and given the starting value \(y_{0}\). These \(Z_{i}\)’s can be calculated from the \(\varXi _{i}\)’s of Eq. (14) as (Fig. 3(B), shaded area):

$$Z_{i}(y_{0}) = \int _{\theta ^{*}}^{\infty } \varXi _{i}(\xi ) \,\mathrm{d}\xi .$$
We now arrive at the desired probability mass function:

$$P_{\tau \mid Y_{0}}(i\Delta t \mid y_{0}) = \bigl(1 - Z_{i+1}(y_{0})\bigr) \prod_{n=1}^{i} Z_{n}(y_{0}), \tag{16}$$
where we have used the fact that \(1-Z_{i+1}\) is equal to the probability that the \((i+1)\)th increment is below threshold, and where we have defined \(Z_{1}(y_{0}) \equiv \operatorname{Prob} (E_{1} \mid y_{0} )= \alpha (y_{0})\), i.e., the probability that the first increment after \(Y_{0}\) is above threshold. Once Eqs. (10), (11), and (16) are evaluated, we apply Eq. (9) to obtain the desired \(Q_{A}\). Using this approach, we obtain an excellent agreement between theory and simulations, as reported in Sect. 3.1.
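A numerical sketch of this duration calculation, assuming a uniform state grid with Riemann-sum integrals and the truncated-Gaussian one-step propagator; `fp_duration_pmf` and all parameter choices are illustrative, not the authors' implementation.

```python
import numpy as np
from math import erf, sqrt

def fp_duration_pmf(y0, F, D, dt, theta, grid, i_max=4):
    """P(tau = i*dt | Y0 = y0) for i = 1..i_max: propagate the truncated
    one-step density to get rho_i, integrate its tail to get Z_i, and
    combine them as in Eq. (16)."""
    dy = grid[1] - grid[0]
    sig = sqrt(2.0 * D * dt)

    def alpha(y):
        # Gaussian tail mass above theta: probability of one more
        # above-threshold increment from state y
        return 0.5 * (1.0 - erf((theta - F(y) * dt) / (sig * sqrt(2.0))))

    alpha_grid = np.array([alpha(y) for y in grid])

    def trunc_prop(y_from):
        # theta-truncated, renormalized Gaussian step (Eq. (13))
        xi = grid - y_from
        p = np.exp(-(xi - F(y_from) * dt) ** 2 / (2.0 * sig ** 2))
        p[xi <= theta] = 0.0
        s = p.sum() * dy
        return p / s if s > 0 else p

    T = np.column_stack([trunc_prop(y) for y in grid])  # T[:, k]: step from grid[k]
    rho = trunc_prop(y0)                                # rho_1
    Z = [alpha(y0)]                                     # Z_1 = alpha(y0)
    pmf = []
    for i in range(1, i_max + 1):
        Z.append(float(np.sum(alpha_grid * rho) * dy))  # Z_{i+1}
        pmf.append((1.0 - Z[i]) * np.prod(Z[:i]))       # Eq. (16)
        rho = T @ rho * dy                              # rho_{i+1} (Eq. (12))
    return np.array(pmf)
```

For a flat drift with \(\theta ^{*}=0\), every \(Z_{n}\) equals 1/2, so the returned masses follow the geometric sequence \((1/2)^{i+1}\), a useful sanity check on the recursion.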
True jump rate
Our estimate of the true jump rate λ relies on the knowledge of the overall jump detection probability \(\varGamma _{C}\) and on the FP detection probability \(\varGamma _{A}\) (both defined in Sect. 2.1). Recall that we calculate \(\varGamma _{A}\) from Eq. (6), while we estimate \(\varGamma _{C}\) directly from the data as \(m/n\), where m is the number of time steps with \(\Delta X(t) > \theta ^{*}\) (either from a true jump or an FP) and n is the total number of time steps in the data time series. On the other hand, from the definition of \(\varGamma _{C}\) we can write:

$$\varGamma _{C} = \varGamma _{A} + \varGamma _{B} - \varGamma _{A} \varGamma _{B}, \tag{17}$$
where \(\varGamma _{B} = \lambda \Delta t\) is the probability of observing a true jump in an interval Δt. This is merely a statement of the addition law of probability, which would read: the probability of detecting a jump in an interval Δt is the sum of the probability of observing a true jump, plus that of observing an FP, minus the probability of observing both at the same time, where we use the fact that FPs and true jumps are independent events. For the test cases described in Sect. 3.2, isolating λ in Eq. (17) is accurate up to an error of 0.02%.
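Isolating λ from Eq. (17) is a one-liner; the helper name below is illustrative.

```python
def jump_rate(gamma_C, gamma_A, dt):
    """Isolate the true-jump probability from
    Gamma_C = Gamma_A + Gamma_B - Gamma_A * Gamma_B,
    then convert it to a Poisson rate via lambda = Gamma_B / dt."""
    gamma_B = (gamma_C - gamma_A) / (1.0 - gamma_A)
    return gamma_B / dt
```

Note that when \(\varGamma _{A}\varGamma _{B}\) is negligible, this reduces to the simple subtraction \(\varGamma _{B} \approx \varGamma _{C} - \varGamma _{A}\).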
True jump amplitude distribution
As in the previous subsection, we obtain an estimate for the true jump amplitude distribution \(Q_{B}\) based on the empirical PDF of jump amplitudes measured from the time series \(Q_{C}\) and on the calculated FP amplitude distribution \(Q_{A}\). Because detected jumps are a mixture of true jumps and FPs, we can write, a priori,

$$Q_{C}(a) = W_{A}\, Q_{A}(a) + W_{B}\, Q_{B}(a), \tag{18}$$
where \(W_{A} = \frac{\varGamma _{A}}{\varGamma _{C}}\) is the probability that a detected jump is an FP, and \(W_{B}= \frac{\varGamma _{B}}{\varGamma _{C}}\) that it is a true jump. The subtlety here is that, contrary to FPs, true jumps are never detected on their own, as they always summate with a diffusive fluctuation. In other words, we never observe the \(B_{i}\)’s directly, but rather the \(B_{i}\)’s plus a diffusive increment. Over a short enough time step, diffusive increments are Gaussian variables and are approximately independent of each other. For the purpose of calculating \(Q_{B}\), we will thus assume these increments are Gaussian with mean zero and variance \(2D \Delta t\). Properly accounting for the y-dependence of the mean would be more precise, but would require \(Q_{C}\) to be broken down into a family of distributions parameterized by y, which would require a very large number of detected jumps in the data.
Let Ξ̃ represent a Gaussian distribution with zero mean and variance \(2D\Delta t\). The \(Q_{B}\) in Eq. (18) should thus be replaced by the convolution \(\tilde{\varXi }\ast Q_{B}\). Furthermore, \(W_{A}\) must in fact be reduced by a factor \((1-\varGamma _{B})\) to account for the probability that an FP occurs during the same interval as a true jump. This leads to
\(Q_{C}(a) = (1-\varGamma _{B})\, W_{A}\, Q_{A}(a) + W_{B}\, (\tilde{\varXi }\ast Q_{B} ) (a).\)  (19)
From this equation, we isolate the convolution term and apply the basic iterative deconvolution algorithm [39] to extract \(Q_{B}\). Let f be a measured, convolved signal whose convolution kernel h is known. We seek the intact signal g such that \(f = h \ast g\). At each iteration, we compute an estimate of g as \(g_{k+1} = g_{k} + [f - (h \ast g_{k})]\), with \(g_{0} = f\). The algorithm converges once the correct signal is reached, since the residual between f and \(h \ast g_{k}\) then vanishes.
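The iteration above (often called Van Cittert deconvolution) is short to implement. A minimal sketch, with a hypothetical narrow kernel and a toy bimodal "amplitude histogram" standing in for the real signals:

```python
import numpy as np

def iterative_deconvolve(f, h, n_iter=100):
    """Basic iterative deconvolution: g_{k+1} = g_k + (f - h * g_k), g_0 = f.

    f: measured (convolved) signal; h: known kernel. Convergence requires
    the convolution operator to be close enough to the identity (here h
    sums to 1 and is dominated by its central tap).
    """
    g = f.copy()
    for _ in range(n_iter):
        g = g + (f - np.convolve(g, h, mode="same"))
    return g

# Illustrative use: blur a bimodal profile with a narrow kernel, then recover it.
h = np.array([0.1, 0.8, 0.1])                   # hypothetical smoothing kernel
g_true = np.zeros(60)
g_true[20:25] = [0.1, 0.3, 0.4, 0.3, 0.1]
g_true[40:43] = [0.2, 0.5, 0.2]
f = np.convolve(g_true, h, mode="same")         # "measured" signal
g_rec = iterative_deconvolve(f, h)
```

In the method of the text, f is the (rescaled) measured mixture term, h is the Gaussian Ξ̃, and g is the sought \(Q_{B}\).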
Iterative procedure, noise intensity, and drift function
We now turn to the problem of simultaneously estimating, from data, all the unknowns in Eq. (1). To this end, we incorporate the calculations of Sect. 2.4 into the iterative scheme depicted in Fig. 4, which consists of three main branches. The two initial branches, I and II, are independent and are performed only once; they are followed by branch III, where the iterations take place. In branch I, the threshold is set (Sect. 2.2) and then applied to the time series to yield the detected jump pool, from which \(\varGamma _{C}\) and \(Q_{C}\) are obtained (Sect. 2.3). The noise intensity is estimated in branch II, along with an initial guess \(\hat{F}_{1}\) for the drift function; both are used in the first iteration of branch III to calculate \(\hat{\varGamma }_{B}\) and λ̂ (Sect. 2.4). The last step in branch III uses D̂, \(\hat{\varGamma }_{B}\), and λ̂ to estimate the drift function F̂, which is fed back to the first step of branch III in order to iteratively refine the estimates.
Noise intensity
As depicted in Fig. 4, the estimation of D does not rely on the value chosen for \(\theta ^{*}\). It does, however, still use the notion of applying a threshold to the increments. Indeed, calculating D̂ relies on partitioning \(\{X(t)\}\) into mostly jump-free segments, the length and number of which are sensitive to the value of the threshold used for detection: the lower the threshold, the more jumps are detected (some of which are FPs) and the shorter these segments are, which, as explained below, can skew the estimate of D. A high threshold, on the other hand, leaves a significant number of true jumps in those segments. The goal here is thus to vary the threshold θ in order to obtain the optimal estimate of D.
In the limit of an infinitesimally small sampling interval, \(\Delta t \rightarrow 0\), the quadratic variation \([Y^{\mathrm{diff}}(t)]\) of a pure diffusion process converges to the so-called integrated variation, which, for additive and time-independent noise, gives [40, 41]
\([Y^{\mathrm{diff}}(t)] = \int _{0}^{T} 2D \,\mathrm{d}t = 2DT,\)  (20)
where \(T = (n-1)\Delta t\) is the total duration of the n samples of \(\{X(t)\}\). We can, therefore, estimate D via the sample quadratic variation, also known as the realized variance, \(RV(t)\) [40, 41]:
\(\hat{D} = \frac{RV(T)}{2T} = \frac{1}{2T} \sum _{i=1}^{n-1} \bigl( \Delta X(t_{i}) \bigr) ^{2}.\)  (21)
For instance, for a test diffusive process with \(F(y)=-0.2y\) and \(D=0.15\), sampled at \(\Delta t = 0.01\mbox{ s}\) (\(N=10^{6}\)), Eq. (21) estimates D with an error of 0.002%. In contrast, applying Eq. (21) to a realization of a jump-diffusion process returns an overestimated D̂, as expected, owing to the large, non-diffusive, positive increments that populate \(\{X(t)\}\). Also note that a threshold set low enough to detect the smallest true jumps will, in general, also remove the largest diffusive increments, which are required to properly apply Eq. (21). Simply removing the detected jumps from the sum in Eq. (21) would, therefore, yield an underestimated D̂.
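A minimal sketch of this realized-variance estimate, using the same test parameters as above (a shorter Euler–Maruyama simulation stands in for the observed time series; exact numbers will differ from the quoted 0.002% error):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a pure diffusion dY = F(Y) dt + sqrt(2D) dW with the
# test parameters quoted in the text: F(y) = -0.2 y, D = 0.15, dt = 0.01 s.
F = lambda y: -0.2 * y
D_true, dt, n = 0.15, 0.01, 10**5
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1) * np.sqrt(2 * D_true * dt)
for i in range(n - 1):
    x[i + 1] = x[i] + F(x[i]) * dt + noise[i]

# Realized-variance estimate: D_hat = RV / (2T), RV = sum of squared increments.
T = (n - 1) * dt
D_hat = np.sum(np.diff(x) ** 2) / (2 * T)
```

For a genuine jump-diffusion series, the same sum would be inflated by the jump increments, which is why the text restricts the calculation to jump-free segments.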
To circumvent this problem, we consider only the negative increments of \(\{X(t)\}\) in the calculation of D̂, as they remain essentially unaffected by the presence of positive jumps, except during a short transient following each jump offset. Indeed, following each jump, we expect a brief period during which the process is out of equilibrium, and since the jumps have positive amplitudes, negative-increment statistics are biased toward negative values during this transient (e.g., Fig. 1). The calculation of D̂ is thus based on applying Eq. (21) to jump-free segments of \(\{X(t)\}\), including only negative increments and neglecting the initial transient at the start of each segment (the duration of which is determined below). This is repeated for various values of θ. The successful estimation of D based only on negative increments relies on our assumption that Δt is small, for in this case increments are approximately independent and distributed as \(\mathcal{N}(0,2D\Delta t)\). For larger Δt, the increment PDF can become asymmetric, meaning that the statistics of negative increments differ from those of positive ones, which would cause errors in our estimation of D.
Let \(T_{\mathrm{off}}\) and \(T_{\mathrm{on}}\) denote the jump offset and onset times, respectively; note that the values of these times, as well as the number of detected jumps, all depend on the specific value of the threshold. The ith segment is then defined as \(\{S(t)\}_{i} = \{X(t) : T_{\mathrm{off}}(i) < t < T_{\mathrm{on}}(i+1)\}\), has duration \(T_{i} = T_{\mathrm{on}}(i+1) - T_{\mathrm{off}}(i)\), and possesses \(n_{i}\) increments \(\{\Delta S(t)\}_{i}\). Out of these \(n_{i}\) increments, we keep only the \(n_{i}^{-}\) that are negative and that occur after the transient of approximate duration Φ. We are thus left with the following subset of increments from each segment:
\(\{\Delta S^{-}(t)\}_{i} = \bigl\{ \Delta S(t) \in \{\Delta S(t)\}_{i} : \Delta S(t) < 0 \mbox{ and } t > T_{\mathrm{off}}(i) + \varPhi \bigr\}.\)  (22)
For each segment, we obtain an estimate \(\hat{D}_{i}\) as follows:
\(\hat{D}_{i} = \frac{1}{2T_{i}^{-}} \sum _{\{\Delta S^{-}(t)\}_{i}} \bigl( \Delta S^{-}(t) \bigr) ^{2},\)  (23)
where \(T_{i}^{-} = n_{i}^{-} \Delta t \) is the effective duration of the combined \(n_{i}^{-}\) negative increments. We then calculate D̂ as an average of the \(\hat{D}_{i}\)’s, weighted by \(T_{i}/T\).
More precisely, here are the steps taken in order to arrive at D̂:
1. Starting from the largest value of \(\{\Delta X(t)\}\), lower the threshold until the largest 5% of jumps are detected; these are the ones with the most prominent transient.
2. Let \(\{S^{*}(t)\}_{i}\) be the segments that follow these jumps (Fig. 5(A)). Average across them at each time step, creating a jump-triggered average trace (Fig. 5(A), black line).
3. Identify Φ as the approximate moment when the jump-triggered average stabilizes, quantified here as when its derivative is less than 0.05 (results do not depend strongly on this particular value: changing Φ by one order of magnitude on either side of the value used here yields estimates of D that differ by less than 0.2%). This gives an estimate of the maximum time scale required for post-jump equilibration.
4. With Φ determined, and for each value of the threshold, extract the \(\{S(t)\}_{i}\)’s (Fig. 5(B)) and apply Eq. (23) to their negative increments for which \(t > \varPhi \) (neglecting segments for which \(T_{i}<\varPhi \)).
This scheme results in an estimate D̂ for each value of the threshold, and the lowest value is chosen as the optimal estimate (Fig. 5(C)). This is because the approach overestimates D at both ends of the threshold range, but for different reasons. For low θ, many more jumps are detected, which makes the \(\{S(t)\}_{i}\)’s shorter, so the associated D̂ is more prone to bias from the residual of the transient. A very high threshold, on the other hand, leaves significant jumps in the \(\{S(t)\}_{i}\)’s, which biases the statistics of the negative increments. The optimal balance is reached somewhere in between, where the segments are long enough for the initial transient to be negligible, and where the jumps inevitably left in the segments do not significantly alter the statistics of the negative increments. The heuristic choice of an intermediate value, namely the one corresponding to the minimal D̂ estimate, gives excellent results in both validation cases (less than 0.1% error, see Table 2).
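The per-segment, negative-increment estimator can be sketched as follows. This assumes the jump-free segments have already been extracted for a given threshold; the helper name and its interface are ours:

```python
import numpy as np

def d_hat_from_segments(segments, dt, phi):
    """Weighted estimate of the noise intensity D from jump-free segments.

    segments: list of 1-D arrays {S(t)}_i (samples between a jump offset
    and the next onset). For each segment, only negative increments after
    the transient of duration phi are used; the per-segment estimates are
    then averaged with weights proportional to segment duration.
    """
    d_i, w_i = [], []
    for s in segments:
        t_i = (len(s) - 1) * dt
        if t_i < phi:
            continue                      # skip segments shorter than the transient
        ds = np.diff(s)
        k0 = int(np.ceil(phi / dt))       # discard increments with t <= phi
        neg = ds[k0:][ds[k0:] < 0.0]
        if neg.size == 0:
            continue
        t_neg = neg.size * dt             # effective duration T_i^-
        d_i.append(np.sum(neg ** 2) / (2.0 * t_neg))
        w_i.append(t_i)
    return np.average(d_i, weights=w_i)
```

Because short-time diffusive increments are symmetric Gaussians, conditioning on the sign does not change their mean square, so the negative-increment sum still targets \(2D\Delta t\) per increment.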
Drift function
Our estimation of the drift function F relies on the differential Chapman–Kolmogorov equation [42], which describes the evolution of the transition probability of a stochastic process in which jumps occur alongside diffusive fluctuations. Let \(Y(t)\) be a jump-diffusion process with transition probability \(P_{YY_{0}}\). For the case of positive Poisson jumps and additive diffusive noise, the differential Chapman–Kolmogorov equation reduces to (see the Appendix)
\(\frac{\partial P_{YY_{0}}}{\partial t} = -\frac{\partial }{\partial y} \bigl[ F(y) P_{YY_{0}} \bigr] + D \frac{\partial ^{2} P_{YY_{0}}}{\partial y^{2}} + \lambda \int _{0}^{\infty } Q_{B}(a)\, P_{YY_{0}}(y-a,t) \,\mathrm{d}a - \lambda P_{YY_{0}}.\)  (24)
If \(Y(t)\) is assumed to have reached its equilibrium state, then the left-hand side vanishes, and on the right-hand side we can replace the transition probability with the first-order equilibrium PDF, \(P_{Y}\), which we assume to be equal to the empirical PDF, \(P_{X}\), of the measured time series \(\{X(t)\}\). The drift function can then be evaluated from Eq. (24) if D, λ, and \(Q_{B}\) are known (or estimated).
Note, however, that F̂ is required in the first step of branch III of the iterative procedure (Fig. 4), since the FP-related statistics, \(\varGamma _{A}\) and \(Q_{A}\), are calculated from the drift function. A preliminary estimate \(\hat{F}_{1}\) of the drift function is thus required. This particular estimate, which is needed only once throughout the inference procedure, is obtained by setting \(\lambda = 0\) in Eq. (24), which then reduces to the Fokker–Planck equation associated with the diffusive part of the stochastic process. The stationary solution of this Fokker–Planck equation establishes a relation between the noise intensity, the drift function, and \(P_{Y}\) [37, 43]:
\(P_{Y}(y) = K \exp \biggl( \frac{1}{D} \int ^{y} F(y') \,\mathrm{d}y' \biggr),\)  (25)
where K is a normalization constant and where, again, we assume that \(P_{Y}=P_{X}\). This first estimate is necessarily flatter than the true F, because the presence of jumps makes \(P_{X}\) wider than it would be without jumps. Successive iterations gradually rectify this by incorporating the estimates of λ and \(Q_{B}\) into Eq. (24).
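Inverting this stationary relation gives \(\hat{F}_{1}(y) = D \,\mathrm{d}\ln P_{X}(y)/\mathrm{d}y\), which can be evaluated from a histogram of the data. A minimal numerical sketch, using i.i.d. samples drawn from the known stationary density of an Ornstein–Uhlenbeck process (F(y) = −y, D = 0.5) as a stand-in for \(\{X(t)\}\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: the stationary density of dY = -Y dt + sqrt(2D) dW is
# Gaussian with variance D (here D = 0.5), so F_hat_1 should recover F(y) = -y.
D = 0.5
x = rng.normal(0.0, np.sqrt(D), size=10**6)

counts, edges = np.histogram(x, bins=41, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
keep = counts > 0                                  # avoid log(0) in empty bins
# Preliminary drift estimate: F_hat_1(y) = D * d/dy ln P_X(y)
F_hat1 = D * np.gradient(np.log(counts[keep]), centers[keep])

# Check the recovered slope near the fixed point (should be close to -1)
mask = np.abs(centers[keep]) < 1.0
slope = np.polyfit(centers[keep][mask], F_hat1[mask], 1)[0]
```

On real jump-diffusion data this same computation yields the flattened first guess \(\hat{F}_{1}\) described above, since jumps broaden \(P_{X}\).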
Results
Here we present three applications of the method developed above. First, we validate the calculation of \(\varGamma _{A}\) and \(Q_{A}\) for the case of a purely diffusive process. Then we apply the full iterative scheme to two simulated jumpdiffusion processes with different characteristics. Finally, we apply our inference method to electrophysiological recordings in pyramidal cells of electric fish.
Validation of the FP statistics calculations
To confirm that the calculations of \(Q_{A}\) and \(\varGamma _{A}\) are accurate, we start with a simple test case in which we consider a time series \(\{X^{\mathrm{diff}}(t)\}\) obtained from a simulated pure diffusion process. As there are no jumps here, the distribution of increments does not possess the asymmetry necessary to properly identify a threshold. For this test case only, we thus opt for a specific value, \(\theta ^{*}=0.1\), that showcases the ability of our method to handle FPs of various durations. The results presented here, however, remain valid for a range of values of \(\theta ^{*}\); for the parameters used in this pure diffusion validation case (Table 1), this range extends from 0.025 up to 0.2. The upper limit is set by the fact that, beyond it, too few FPs are detected, which precludes any statistical calculation. The lower limit, on the other hand, arises because too many FPs are detected, such that, for instance, they occur every other time step. In such a case, Δt is too large and the estimation of λ becomes imprecise owing to statistical fluctuations in the number of detected FPs.
Applying the threshold in this case leads to a detected jump pool composed entirely of FPs. The goal now is to compare the measured \(Q_{C}\) and \(\varGamma _{C}\) with the calculated \(Q_{A}\) and \(\varGamma _{A}\). If we find that \(\varGamma _{C} \approx \varGamma _{A}\) and that \(Q_{C} \approx Q_{A}\), then we will effectively have shown that the true jump rate is zero, \(\lambda = 0\), and that our method correctly calculates the FP amplitude distribution. In this pure diffusion test case, these calculations rely on knowledge of the correct noise intensity D and the correct drift function F; this will not be the case in subsequent sections.
With the particular values of D and F used here to simulate \(\{X^{\mathrm{diff}}(t)\}\) (Table 1), we find that FPs are either singlets, doublets, or triplets, which contribute differently to the measured amplitude distribution \(Q_{C}\). Indeed, the fact that longer FPs tend to have larger amplitudes and that FP durations are always multiples of Δt creates distinct modes in the measured amplitude distribution (Fig. 6(A), blue histogram). By taking the sum out of the integral in Eq. (9), we can write \(Q_{A}(a) = \sum_{i=1}^{\infty }Q_{i}(a)\), where
are the individual distributions associated with FPs of duration \(i\Delta t\) (Fig. 6(A), yellow curves). These distributions are then summed to obtain \(Q_{A}\), which precisely matches \(Q_{C}\) for this purely diffusive example (Fig. 6(A), black curve).
Furthermore, by applying Eq. (5) we calculate the y-dependent detection probability, \(\alpha (y)\) (Fig. 6(B), yellow curve). As expected, this function depends nontrivially on y and reflects the nonlinearity of the specific drift function used in this example. If multiplicative noise had been used, the noise intensity \(D(y)\) would also have influenced the shape of \(\alpha (y)\). To validate this calculation of \(\alpha (y)\), we run Monte Carlo simulations of the diffusion process
\(\mathrm{d}Y^{\mathrm{diff}} = F \bigl( Y^{\mathrm{diff}} \bigr) \,\mathrm{d}t + \sqrt{2D} \,\mathrm{d}W(t)\)
over a duration Δt, but with a time step of \(\Delta t/1000\) and with various initial conditions along the y-axis. For each initial condition, we evaluate the FP detection probability as the ratio of the number of Monte Carlo runs in which \(Y^{\mathrm{diff}}(\Delta t) > Y ^{\mathrm{diff}}(0) + \theta ^{*}\) to the total number of runs (Fig. 6(B), blue dots); the result precisely matches the calculated \(\alpha (y)\). Finally, we obtain the overall FP detection probability \(\varGamma _{A}\) from Eq. (6), which, in this pure diffusion test case, differs from \(\varGamma _{C}\) by only 0.06%.
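This Monte Carlo check can be sketched compactly. The drift, noise intensity, and threshold below are illustrative stand-ins in the flavour of the pure-diffusion test case, not the exact values of Table 1:

```python
import numpy as np

rng = np.random.default_rng(2)

def alpha_mc(y0, theta, F, D, dt, n_sub=1000, n_runs=20000):
    """Monte Carlo estimate of the FP detection probability alpha(y0):
    the fraction of finely integrated diffusion paths started at y0 whose
    net displacement after one sampling interval dt exceeds theta."""
    h = dt / n_sub                                    # fine integration step
    y = np.full(n_runs, y0)
    for _ in range(n_sub):
        y += F(y) * h + np.sqrt(2 * D * h) * rng.standard_normal(n_runs)
    return np.mean(y - y0 > theta)

# Hypothetical example values
a = alpha_mc(0.0, theta=0.1, F=lambda y: -0.2 * y, D=0.15, dt=0.01)
```

Sweeping `y0` over a grid reproduces the blue dots of Fig. 6(B), to be compared against the analytical \(\alpha (y)\) from Eq. (5).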
Validation of the iterative scheme
Before applying the iterative procedure (Fig. 4) to real data, we first validate it against time series generated by numerically integrating Eq. (1) (using the Euler–Maruyama scheme). We consider two validation cases: in Case 1 the amplitude of the jumps is comparable to the diffusive fluctuations and jumps are sparser in time (i.e., occur at a lower rate) than in Case 2, where jumps are much larger than the background noise and their rate is double that of Case 1 (Fig. 7(A)). The specific functions and parameters used to generate and analyze these validation data are shown in Table 1, which can be summarized as follows: low rate, low amplitude, high noise for Case 1, and high rate, high amplitude, low noise for Case 2. Preliminary tests with a linear drift function showed a successful fit between the fitted SDE and the numerical data. We now opt for a more general and arbitrary shape where the drift function is nonlinear and nonmonotonic. The only restrictions are that it yields a single stable fixed point and that the resulting stochastic process is stationary. We thus restrict our analyses to drift functions that are mostly decreasing. Although the parameters and functions used for the simulations are known, they are not used in the inference procedure, only \(\{X(t)\}\) is. To assess the performance of the proposed method, we compare the estimated D̂, λ̂, F̂, and \(\hat{Q}_{B}\) with their true values.
We choose these two specific cases because they challenge both the sensitivity and the robustness of our method. In Case 1, the statistics of \(\{X(t)\}\) are not too different from those of a pure diffusion process, which makes it easier to estimate F, but difficult to extract \(Q_{B}\) amidst the FPs. In contrast, the jumps in Case 2 are well separated from the diffusive fluctuations, which allows for a more direct estimation of \(Q_{B}\). The presence of large jumps in this case, however, significantly alters \(\{X(t)\}\) and its PDF, making it harder to estimate F. In both cases, however, we find that the original SDEs can be precisely recovered by our method. For instance, simulating Eq. (1) with the D̂, λ̂, F̂, and \(\hat{Q}_{B}\) of the last iteration not only produces time series that resemble the originals (Figs. 7(A) and 7(B)), but also yields an excellent fit between the reconstructed and original PDFs (Fig. 7(C)), with an \(O(10^{-4})\) root-mean-square error (normalized by the range of \(\{X(t)\}\)), as well as between the reconstructed and original ACFs (Fig. 7(D)).
Inspecting the results from the last iteration, we indeed see that the correct drift function is recovered through the differential Chapman–Kolmogorov equation (Fig. 8(A)). This estimate is then used to calculate the next \(Q_{A}\) and \(\varGamma _{A}\) (Fig. 8(B)), which allows the correct \(Q_{B}\) to be demixed from the measured \(Q_{C}\) (Fig. 8(C)). We also obtain low relative errors between our estimates and the correct values of D and λ (Table 2). Note also that, although not shown here, we obtain the same fit quality between original and reconstructed processes when we apply our method to hybrid cases where, for instance, small jump amplitudes are paired with a higher rate instead of a lower one, and vice versa.
Application to experimental data
We now proceed with an application of our inference method to electrophysiological data published in Ref. [44]. These data consist of in vitro, intracellular recordings of membrane voltage fluctuations in pyramidal neurons of the weakly electric fish Apteronotus leptorhynchus. These fish are endowed with an active sensing mechanism whereby they generate a high-frequency (∼700 to 1000 Hz) oscillatory electric field around their body. This electric signal, along with any distortions caused by objects, prey, and conspecifics, is sensed by electroreceptors located on the fish’s body. This information is then sent to the hindbrain, where it reaches the first stage of electrosensory processing, the electrosensory lateral line lobe (ELL). The recordings in Ref. [44] were taken from neurons of the ELL, specifically in the centrolateral and centromedial segments (CLS and CMS, respectively). In order to isolate the impact of voltage-gated ion channels on membrane potential fluctuations, the ELL was treated with pharmacological agents (CNQX and APV) that block synaptic transmission onto the pyramidal cells. The resulting fluctuations are thus fully attributed to cell-intrinsic sources, which we refer to as membrane noise. The main source of this type of noise is often assumed to be the stochastic opening and closing of ion channels, i.e., channel noise [45]; in this case, however, we cannot rule out other potential contributions, as nontrivial soma-dendrite interactions have been observed in these cells [46]. Note also that, by imposing different holding currents on the cells, Ref. [44] recorded ongoing membrane noise at various levels of hyperpolarization relative to spike threshold.
Of interest here is the presence of large, jump-like events, called blips, that abruptly depolarize the cells (Fig. 9(A), asterisks). Although Ref. [44] puts forth a hypothesis as to the functional role of these blips, the mechanism underlying their occurrence is unknown. This, along with the limited amount of data, hinders the development of any meaningful mechanistic model of this phenomenon. The jump-diffusion inference approach developed here, however, is particularly well suited to circumvent this knowledge gap. Indeed, the resulting phenomenological models provide a useful tool for dynamically interpreting the available data without relying on poorly constrained biophysical mechanisms. For instance, we can address questions such as: do certain parameters or functions of the model change as a function of the mean membrane potential?
Here we analyze recordings from two CMS and two CLS cells, each with five or six levels of hyperpolarization: from −25 to 0 mV below threshold, in 5 mV steps. Using these levels relative to spike threshold is required to compare cells that might have different absolute thresholds (e.g., −67 to \(-63~\mbox{mV}\) for CMS cells [44]). After removing experimental artifacts (see Sect. 3.4), we obtain a total of 23 traces, each lasting approximately 10 s. Applying our inference method to these traces yields a good fit between the resulting simulations and the original data (Fig. 9(A)): the PDFs differ only by \(O(10^{-2})\) normalized root-mean-square errors, and the power spectra fall within 95% confidence intervals of each other (Figs. 9(B) and 9(C)).
Further insight can be gained by comparing the estimated SDE parameters and functions D̂, λ̂, \(\hat{Q}_{B}\), and F̂ across all traces. We see, for instance, that CLS cells increase their jump rate (Fig. 10(A)), but not their jump amplitudes (Fig. 10(B)), as they approach threshold. Note also that we could not measure any significant jump component for CMS cells; instead, fluctuations in these cells are well described by pure diffusion. Furthermore, all cells show an increase in their diffusive noise intensity with depolarization, and this increase is more prominent in CMS cells (Fig. 10(C)). Lastly, to compare the different drift functions with a scalar measure, we apply a linear fit to F (estimated as in the previous section) in the vicinity (\({\pm}0.2~\mbox{mV}\)) of the stable fixed point. The slope resulting from this fit can be interpreted as a measure of how wide or narrow the potential function is around the resting membrane voltage. Using this measure, we find no systematic intra-cell trend, but we do observe large differences between cell types: CLS cells have a wider potential function than CMS cells (Fig. 10(D)).
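The local-slope measure can be sketched as follows; the drift function `F_demo` below is a hypothetical estimate invented for illustration, with its fixed point deliberately off-center:

```python
import numpy as np

def local_slope(y, F_hat, window=0.2):
    """Linear fit of an estimated drift function around its stable fixed
    point (a downward zero crossing), within +/- `window` of that point."""
    # locate the first downward zero crossing of F_hat
    crossings = np.where(np.diff(np.sign(F_hat)) < 0)[0]
    y0 = y[crossings[0]]
    mask = np.abs(y - y0) <= window
    slope, _ = np.polyfit(y[mask], F_hat[mask], 1)
    return slope

y = np.linspace(-1.0, 1.0, 201)
F_demo = -0.8 * (y - 0.1) - 0.5 * (y - 0.1) ** 3   # hypothetical drift estimate
s = local_slope(y, F_demo)                          # close to -0.8 near y = 0.1
```

A steeper (more negative) slope corresponds to a narrower potential well around the resting voltage, which is the scalar compared across cells in Fig. 10(D).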
Data processing
For each cell, the raw data consist of a continuous, 60 to 70 second staircase-like trace, sampled at 20 kHz. Each step lasts ∼10 s and corresponds to a different holding current, applied so as to create a 5 mV hyperpolarization from the previous level. To segment the recordings into separate traces for each level, we first identify the transition times between holding currents. This is done visually, as the transitions are unambiguous, and we omit ±0.5 s around those times. At the −5 and \(0~\mbox{mV}\) levels, a few spikes (1 to 4) occur in the recordings; they are manually removed from the traces along with the ensuing refractory period.
A conspicuous aspect of the resulting traces is the presence of slow, large-amplitude (1 Hz, \({\sim}1~\mbox{mV}\)) quasi-oscillations overlaid on the faster, stochastic fluctuations. The exact source of this slow component is unknown, but it is possibly related to persistent sodium channels, which have been shown to populate the soma and proximal apical dendrites of ELL cells [44] and to produce slow perithreshold oscillations in entorhinal stellate neurons [47]. In any case, these slow oscillations are outside the scope of the method presented here. A moving average filter (0.05 s window size) is thus applied to remove this low-frequency content from the signal.
Line noise is removed at all 60 Hz multiples with notch filters, but the data are also contaminated with artifacts in other frequency bands, potentially from interference with other sources. This is most prominent in the 900–3000 Hz band, but artifacts also appear around lower frequencies, e.g., 100, 270, and 550 Hz. To remove them, we combine a low-pass filter with a 900 Hz cutoff and 20 Hz wide band-stop filters centered on the other problematic frequencies. Electrophysiological recordings can be dominated by measurement noise at high frequencies [48]; in our case, this is seen as a flattening of the PSD above 1000 Hz, so the 900 Hz cutoff used here does not discard important biological signal.
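A sketch of this filtering chain using SciPy. The frequencies are those quoted above; the filter orders and the notch quality factor are our assumptions, since the text does not specify them:

```python
import numpy as np
from scipy import signal

fs = 20_000                              # sampling rate (Hz)

def clean(x, line_f=60.0, bad_freqs=(100.0, 270.0, 550.0), cutoff=900.0):
    """Notch out 60 Hz multiples, band-stop contaminated bands, low-pass at 900 Hz.
    Zero-phase filtering (filtfilt) avoids distorting jump timing."""
    # notch filters at every line-noise multiple below the low-pass cutoff
    for k in range(1, int(cutoff // line_f) + 1):
        b, a = signal.iirnotch(k * line_f, Q=30.0, fs=fs)
        x = signal.filtfilt(b, a, x)
    # 20 Hz wide band-stop filters on the other problematic frequencies
    for f0 in bad_freqs:
        sos = signal.butter(4, [f0 - 10.0, f0 + 10.0], "bandstop", fs=fs, output="sos")
        x = signal.sosfiltfilt(sos, x)
    # low-pass to suppress the measurement-noise-dominated band
    sos_lp = signal.butter(4, cutoff, "lowpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos_lp, x)
```

Feeding in a synthetic mixture of a 30 Hz tone (which should pass), a 60 Hz tone (line noise), and a 2000 Hz tone (above cutoff) is a quick way to sanity-check the chain.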
The end result of this processing chain is a set of time series that exhibit fluctuations typical of jump-diffusion processes. We do observe, however, significant higher-order correlations on the smallest time scales (\(O(\Delta t)\), \(\Delta t = 50~{\mu}\mbox{s}\)). To quantify these correlations, we use the notion of the Einstein–Markov time scale [5], a measure of the time scale below which the Markov property no longer holds. Stochastic time series often depart from the Markov property on small time scales, possibly because of noise-source correlations, an inertial component in the dynamics, or measurement noise [5]. Following [5] and [49], we estimate this Einstein–Markov time scale by finding the value of τ that minimizes
\(\chi ^{2}(\tau ) = \int \mathrm{d}x_{1} \,\mathrm{d}x_{2} \,\mathrm{d}x_{3}\, \frac{ [ p(x_{1},x_{2},x_{3}) - p(x_{3}|x_{2})\, p(x_{1},x_{2}) ] ^{2}}{\sigma ^{2}},\)
where \(x_{1}=x(t)\), \(x_{2}=x(t+\tau )\), \(x_{3}=x(t+2\tau )\), and \(\sigma ^{2}\) is the sum of the traces of the covariance matrices associated with the distributions in the numerator. For a proper Markov process, \(\chi ^{2}=0\) for all τ. In this case, we find the minimum of \(\chi ^{2}\) at 1.2 ms, indicating that the Einstein–Markov time scale of the data is over one order of magnitude larger than the sampling interval Δt. This means that, on the time scale of individual observations, the data evolve with a history dependence that is incompatible with a Markovian description. If, however, we look at the data on a coarser time scale, e.g., the Einstein–Markov time scale of 1.2 ms, then the Markov property is approximately satisfied; only then can we hope to use Eq. (1) as a valid model for these data. To account for this, we resample the data at a 1.2 ms interval (∼830 Hz sampling rate), obtaining the final time series on which to apply our method (Fig. 9(A), top), with the time step equal to the Einstein–Markov time scale. Note that this situation is conceptually similar to how the Langevin model of diffusion (where the position of a particle is, by itself, not Markovian) reduces to the Einstein model (where the position is Markovian) only above a certain time scale [43].
Discussion
In this study, we develop an iterative procedure that recovers the parameters and functions of a jump-diffusion SDE, based solely on a realization of the associated stochastic process. This approach is validated when the jumps are comparable in size to the diffusive fluctuations (Case 1), as well as when they are much larger than diffusive fluctuations (Case 2). We apply this method to membrane voltage fluctuations recorded in pyramidal neurons of electric fish. Our analysis reveals that these data can indeed be represented as jump-diffusion processes. We find that pyramidal neurons increase their jump rate and noise intensity as they approach spike threshold, while their jump amplitudes and drift function remain unchanged.
Our approach relies on five main components: the use of the differential Chapman–Kolmogorov equation to estimate the drift function, the use of quadratic variation on jump-free segments to estimate the diffusive noise intensity, the detection of jumps via threshold-crossing of the increments, the modeling of detected jumps as a mixture of true jumps and FPs, and the calculation of FP statistics used to extract true jump statistics from the detected jump pool.
Although we estimate the drift function and the true jump amplitude distribution nonparametrically, we do limit our study to the case of additive diffusive noise, constant jump rate, and Poisson jumps. Relaxing the additive noise assumption would require an estimation scheme for the diffusion function \(D(y)\). For purely diffusive processes, this function can be obtained directly through estimation of the second Kramers–Moyal coefficient, which is defined in terms of the second conditional moment of the increments; evaluating this moment simply requires knowledge of the conditional PDF across time steps. For a Poisson jump-diffusion process, however, Ref. [31] has shown that the diffusion function can in fact be expressed in terms of the second conditional moment of the increments, the jump rate, and the second moment of the jump amplitudes. It should thus be possible to include the estimation of \(D(y)\) in the iterative portion of our method (Fig. 4): estimates of the jump rate and of the amplitude distribution could be used at each pass to estimate the diffusion function. Furthermore, we have limited our analysis to noise intensities for which jump amplitudes are, on average, an order of magnitude or more larger than diffusive fluctuations. When diffusive fluctuations and jumps are of similar average magnitude, the number of detected FPs becomes too large, and estimates of λ and \(Q_{A}\) become imprecise due to increased statistical fluctuations. A much finer temporal resolution would be necessary to address this particular case.
As for the assumption of a constant jump rate, it should be possible to extract a rate function \(\lambda (y)\) as long as a y-dependent version of Eq. (17) can be written; this would require a data time series long enough to produce an estimate of \(\varGamma _{C}(y)\). Relaxing the assumption of Poisson jumps, however, would be more difficult. The detection probability of true jumps, \(\varGamma _{B} = \lambda \Delta t\), would obviously need to be replaced with the appropriate expression. Moreover, the specific form of the differential Chapman–Kolmogorov equation used here, Eq. (24), relies on the assumption of Poisson jumps (see the Appendix) and would thus need to be extended in a manner that depends on the precise non-Poissonian nature of the jump process. More specifically, the last two terms in Eq. (24), which are originally defined in terms of the transition rates of the Poisson jump process, would now be derived from the modified \(\varGamma _{B}\). Note that, for the special case of true jumps with zero-mean amplitudes, the drift function can be estimated directly from the first conditional moment of the increments, without relying on Eq. (24) [31].
Membrane noise
The unusual characteristics of the membrane noise observed in CLS neurons, initially reported in Ref. [44] and represented here as jump-diffusion SDEs, might be implicated in novel ways in electrosensory processing. The analysis we perform here is a first step toward investigating this possibility computationally.
The positive impact of noise on information processing in neural systems is widely recognized [50]. Although channel noise was initially thought to be too weak compared with synaptic noise to influence a neuron’s output statistics [51, 52], it has since been shown to significantly impact neuronal reliability and action potential timing [47, 53,54,55,56,57]. Because it arises from the stochastic opening of voltage-gated ion channels, channel noise has been successfully modeled by populations of Markov chains with voltage-dependent transition rates [45]. In the quest for more computationally efficient models, however, various approximations have been used to model the collective behavior of these Markov chains as simple diffusion SDEs [57,58,59,60] (sometimes called the diffusion approximation for channel noise [61, 62], in reference to the diffusion approximation for synaptic bombardment). One of these approaches, for instance, introduces a current noise term directly in the membrane equation [57, 63], effectively modeling the subthreshold voltage locally as an Ornstein–Uhlenbeck process. It is perhaps not surprising, then, that we obtain a good match between the observed CMS membrane noise (at various holding potentials) and a pure diffusion SDE. In cases where membrane noise is more accurately described by multiplicative conductance noise [57], however, we might expect deviations from the simple SDE used here, similarly to how the diffusion approximation can misrepresent the subthreshold voltage distribution for certain types of synaptic drive [64].
We cannot completely exclude the possibility that small, hard-to-detect blips occur in CMS cells. Although our method proves capable of handling this type of situation (Sect. 3.2), the limited amount of available experimental data in this case precludes us from conclusively ruling out the existence of a jump component in models of CMS membrane noise. In addition, in both types of cells we find a positive correlation between the noise intensity and the holding potential (Fig. 10). This is consistent with how membrane potential variance has been observed to increase with depolarization in these same cells [44], as well as in rat neocortical pyramidal neurons [65]. Simple Markov models involving only Na^{+} and K^{+} channels are able to reproduce this correlation [66].
The presence of blips in CLS neurons suggests that ion channels coactivate to produce abrupt depolarizing currents. It was shown in [67] that sodium channels appear in clusters, or hot spots, in ELL pyramidal cells. This might allow local depolarization of the membrane to couple channels within a cluster sufficiently. Alternatively, channels may be physically and functionally coupled through scaffolding protein complexes, as observed in [68]. Regardless of the exact coupling mechanism, the commonly used assumption of independence between channels [45, 57] is likely violated by these blips. The fact that we have successfully fitted a jump-diffusion SDE to CLS membrane noise suggests that a diffusion-like approximation could be applied in this case as well. Such a deductive approach would, however, require tentative descriptions of local channel coupling to be included in the kinetic schemes.
Given the unknown biophysical mechanism underlying the blips, our fitted jump-diffusion model is uniquely positioned to address questions related to their functional role. Future work will thus aim to incorporate it as a membrane noise term in a more complete model of CLS cells [69]. By accounting for the synaptic input associated with electrosensory stimulation, the resulting model could probe the possibility that blips assist or influence spiking, perhaps through a stochastic resonance-like phenomenon. Stochastic resonance, and more generally stochastic facilitation, has been shown to be mediated by channel noise in models of auditory brain stem neurons [51] and in modeled neuronal arrays [70], as well as by synaptic noise in models of neocortical pyramidal neurons [71]. Since blips share a shape and amplitude similar to those of AMPA-driven excitatory postsynaptic potentials, this raises the question of whether stochastic resonance is at play in the detection of weak electrosensory signals, such as small prey. This was indeed hypothesized, but not explicitly shown, in Ref. [44]: the voltage dependence of membrane noise makes it impossible to vary the noise level, the rate of occurrence, or the amplitude of blips independently of the membrane potential in experiments. The value of this hypothesis, however, could be assessed by performing a numerical experiment with our fitted SDE representation of blip-laden membrane noise.
Lastly, although we apply our method here to recordings where synaptic input is completely blocked, it might be applicable to certain types of synaptic input patterns, such as correlated bombardment. We also hope to apply this method to fluctuations of the active-sensing rate of pulse-type electric fish, where jump-like events also occur [72].
Notes
 1.
Alternatively, one could obtain Eq. (9) directly from Eq. (7) by invoking the law of total probability [74] for the set of mutually exclusive and exhaustive events \(\{\tau = i\Delta t, i \in \mathbb{N}^{*}\}\).
 2.
This equilibrium PDF exists for stable processes and is defined by \(P_{Y}(y) = \lim_{t-t_{0} \rightarrow \infty } P_{Y|Y_{0}}(y,t|y_{0},t_{0})\) [43].
 3.
In this case we refer to how stochastic synaptic currents, modeled as shot noise and assumed to arrive at a high rate, can be approximated as a diffusion process.
Abbreviations
 SDE: stochastic differential equation
 PDF: probability density function
 ACF: autocorrelation function
 PSD: power spectral density
 FP: false positive
 ELL: electrosensory lateral line lobe
 CLS: centrolateral segment
 CMS: centromedial segment
References
 1.
Haken H. Information and self-organization: a macroscopic approach to complex systems. Berlin: Springer; 1988. OCLC: 649906343.
 2.
Holovatch Y, Kenna R, Thurner S. Complex systems: physics beyond physics. Eur J Phys. 2017;38(2):023002.
 3.
Nicolis G, RouvasNicolis C. Complex systems. Scholarpedia. 2007;2(11):1473.
 4.
Clemson PT, Stefanovska A. Discerning nonautonomous dynamics. Phys Rep. 2014;542(4):297–368.
 5.
Friedrich R, Peinke J, Sahimi M, Reza Rahimi Tabar M. Approaching complexity by stochastic methods: from biological systems to turbulence. Phys Rep. 2011;506(5):87–162.
 6.
Siegert S, Friedrich R, Peinke J. Analysis of data sets of stochastic systems. Phys Lett A. 1998;243(5):275–80.
 7.
Lamouroux D, Lehnertz K. Kernel-based regression of drift and diffusion coefficients of stochastic processes. Phys Lett A. 2009;373(39):3507–12.
 8.
van Mourik AM, Daffertshofer A, Beek PJ. Estimating Kramers–Moyal coefficients in short and non-stationary data sets. Phys Lett A. 2006;351(1):13–7.
 9.
Prusseit J, Lehnertz K. Stochastic qualifiers of epileptic brain dynamics. Phys Rev Lett. 2007;98(13):138103.
 10.
Deco G, Martí D, Ledberg A, Reig R, Sanchez Vives MV. Effective reduced diffusion-models: a data driven approach to the analysis of neuronal dynamics. PLoS Comput Biol. 2009;5(12):e1000587.
 11.
Hindriks R, Jansen R, Bijma F. Unbiased estimation of Langevin dynamics from time series with application to hippocampal field potentials in vitro. Phys Rev E. 2011;84(2):021133.
 12.
Hindriks R, Bijma F. Dynamics underlying spontaneous human alpha oscillations: a datadriven approach. NeuroImage. 2011;57(2):440–51.
 13.
Petelczyc M, Zebrowski JJ, Baranowski R. Kramers–Moyal coefficients in the analysis and modeling of heart rate variability. Phys Rev E. 2009;80:031127.
 14.
Petelczyc M, Żebrowski JJ, OrłowskaBaranowska E. A fixed mass method for the Kramers–Moyal expansion—application to time series with outliers. Chaos. 2015;25(3):033115.
 15.
Ragwitz M, Kantz H. Indispensable finite time corrections for Fokker–Planck equations from time series data. Phys Rev Lett. 2001;87(25):254501.
 16.
Tutkun M, Mydlarski L. Markovian properties of passive scalar increments in grid-generated turbulence. New J Phys. 2004;6(1):49.
 17.
Mousavi SM, Reihani SNS, Anvari G, Anvari M, Alinezhad HG, Tabar MRR. Stochastic analysis of time series for the spatial positions of particles trapped in optical tweezers. Sci Rep. 2017;7(1):4832.
 18.
Musila M, Lánský P. Generalized Stein’s model for anatomically complex neurons. Biosystems. 1991;25(3):179–91.
 19.
Giraudo MT, Sacerdote L. Jumpdiffusion processes as models for neuronal activity. Biosystems. 1997;40(1):75–82.
 20.
Giraudo MT, Sacerdote L, Sirovich R. Effects of random jumps on a very simple neuronal diffusion model. Biosystems. 2002;67(1):75–83.
 21.
Sirovich R, Sacerdote L. Noise induced phenomena in jump diffusion models for single neuron spike activity. In: 2004 IEEE international joint conference on neural networks (IEEE cat. no. 04CH37541). vol. 4. 2004. p. 3025–8.
 22.
Sirovich R, Sacerdote L, Villa AEP. Cooperative behavior in a jump diffusion model for a simple network of spiking neurons. Math Biosci Eng. 2014;11(2):385–401.
 23.
Luczka J, Bartussek R, Hänggi P. White-noise-induced transport in periodic structures. Europhys Lett. 1995;31(8):431.
 24.
Łuczka J, Czernik T, Hanggi P. Symmetric white noise can induce directed current in ratchets. Phys Rev E. 1997;56:3968–75.
 25.
Czernik T, Kula J, Łuczka J, Hanggi P. Thermal ratchets driven by Poissonian white shot noise. Phys Rev E. 1997;55:4057–66.
 26.
Kou SG. Jump-diffusion models for asset pricing in financial engineering. In: Birge JR, Linetsky V, editors. Financial engineering. Handbooks in operations research and management science. vol. 15. Amsterdam: Elsevier; 2007. p. 73–116.
 27.
Tankov P, Voltchkova E. Jump-diffusion models: a practitioner’s guide. Banque et Marchés. 2009.
 28.
Daly E, Porporato A. Probabilistic dynamics of some jump-diffusion systems. Phys Rev E. 2006;73(2):026108.
 29.
Bandi FM, Nguyen TH. On the functional estimation of jump-diffusion models. J Econom. 2003;116(1):293–328.
 30.
Johannes M. The statistical and economic role of jumps in continuous-time interest rate models. J Finance. 2004;59(1):227–60.
 31.
Anvari M, Tabar MRR, Peinke J, Lehnertz K. Disentangling the stochastic behavior of complex time series. Sci Rep. 2016;6:35435.
 32.
Lee SS, Hannig J. Detecting jumps from Lévy jump diffusion processes. J Financ Econ. 2010;96(2):271–90.
 33.
FigueroaLópez JE, Nisen J. Optimally thresholded realized power variations for Lévy jump diffusion models. Stoch Process Appl. 2013;123(7):2648–77.
 34.
Pawula RF. Approximation of the linear Boltzmann equation by the Fokker–Planck equation. Phys Rev. 1967;162:186–8.
 35.
Jun JJ, Longtin A, Maler L. Active sensing associated with spatial learning reveals memorybased attention in an electric fish. J Neurophysiol. 2016;115(5):2577–92.
 36.
Gillespie DT. Markov processes: an introduction for physical scientists. 1st ed. Boston: Academic Press; 1991.
 37.
Risken H, Frank T. The Fokker–Planck equation: methods of solution and applications. 2nd ed. Springer series in synergetics. Berlin: Springer; 1996.
 38.
Patankar SV. Numerical heat transfer and fluid flow. Series in computational methods in mechanics and thermal sciences. New York: Hemisphere; 1980. OCLC: 31743097.
 39.
Katsaggelos AK. Iterative image restoration algorithms. Opt Eng. 1989;28(7):287735.
 40.
Aït-Sahalia Y, Mancini L. Out of sample forecasts of quadratic variation. J Econom. 2008;147(1):17–33.
 41.
Hansen P, Lunde A. A realized variance for the whole day based on intermittent high-frequency data. J Financ Econom. 2005;3(4):525–54.
 42.
Gardiner CW. Stochastic methods: a handbook for the natural and social sciences. 4th ed. Springer series in synergetics. Berlin: Springer; 2009.
 43.
Gillespie DT, Seitaridou E. Simple Brownian diffusion: an introduction to the standard theoretical models. 1st ed. Oxford: Oxford University Press; 2013.
 44.
Marcoux CM, Clarke SE, Nesse WH, Longtin A, Maler L. Balanced ionotropic receptor dynamics support signal estimation via voltage-dependent membrane noise. J Neurophysiol. 2016;115(1):530–45.
 45.
White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends Neurosci. 2000;23(3):131–7.
 46.
Noonan L, Doiron B, Laing C, Longtin A, Turner RW. A dynamic dendritic refractory period regulates burst discharge in the electrosensory lobe of weakly electric fish. J Neurosci. 2003;23(4):1524–34.
 47.
Dorval AD, White JA. Channel noise is essential for perithreshold oscillations in entorhinal stellate neurons. J Neurosci. 2005;25(43):10025–8.
 48.
Diba K, Lester HA, Koch C. Intrinsic noise in cultured hippocampal neurons: experiment and modeling. J Neurosci. 2004;24(43):9723–33.
 49.
Bahraminasab A, Ghasemi F, Stefanovska A, McClintock PVE, Friedrich R. Physics of brain dynamics: Fokker–Planck analysis reveals changes in EEG δ–θ interactions in anaesthesia. New J Phys. 2009;11(10):103051.
 50.
McDonnell MD, Ward LM. The benefits of noise in neural systems: bridging theory and experiment. Nat Rev Neurosci. 2011;12(7):415–26.
 51.
Schmerl BA, McDonnell MD. Channel-noise-induced stochastic facilitation in an auditory brainstem neuron model. Phys Rev E. 2013;88(5):052722.
 52.
Burkitt AN. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern. 2006;95(1):1–19.
 53.
Faisal AA, Laughlin SB. Stochastic simulations on the reliability of action potential propagation in thin axons. PLoS Comput Biol. 2007;3(5):e79.
 54.
Schneidman E, Freedman B, Segev I. Ion channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comput. 1998;10(7):1679–703.
 55.
Faisal AA, Selen LPJ, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci. 2008;9(4):292–303.
 56.
Chow CC, White JA. Spontaneous action potentials due to channel fluctuations. Biophys J. 1996;71(6):3013–21.
 57.
Goldwyn JH, SheaBrown E. The what and where of adding channel noise to the Hodgkin–Huxley equations. PLoS Comput Biol. 2011;7(11):e1002247.
 58.
Fox RF, Lu Y. Emergent collective behavior in large numbers of globally coupled independently stochastic ion channels. Phys Rev E. 1994;49(4):3421–31.
 59.
Fox RF. Stochastic versions of the Hodgkin–Huxley equations. Biophys J. 1997;72(5):2068–74.
 60.
Goldwyn JH, Imennov NS, Famulare M, SheaBrown E. Stochastic differential equation models for ion channel noise in Hodgkin–Huxley neurons. Phys Rev E. 2011; 83(4 Pt 1):041908.
 61.
Linaro D, Storace M, Giugliano M. Accurate and fast simulation of channel noise in conductance-based model neurons by diffusion approximation. PLoS Comput Biol. 2011;7(3):e1001102.
 62.
Orio P, Soudry D. Simple fast and accurate implementation of the diffusion approximation algorithm for stochastic ion channels with multiple states. PLoS ONE. 2012;7(5):e36670.
 63.
Rowat P. Interspike interval statistics in the stochastic Hodgkin–Huxley model: coexistence of gamma frequency bursts and highly irregular firing. Neural Comput. 2007;19(5):1215–50.
 64.
Richardson MJE, Gerstner W. Statistics of subthreshold neuronal voltage fluctuations due to conductance-based synaptic shot noise. Chaos. 2006;16(2):026106.
 65.
Jacobson GA, Diba K, YaronJakoubovitch A, Oz Y, Koch C, Segev I, et al. Subthreshold voltage noise of rat neocortical pyramidal neurones. J Physiol. 2005;564(Pt 1):145–60.
 66.
Steinmetz PN, Manwani A, Koch C, London M, Segev I. Subthreshold voltage noise due to channel fluctuations in active neuronal membranes. J Comput Neurosci. 2000;9(2):133–48.
 67.
Turner RW, Maler L, Deerinck T, Levinson SR, Ellisman MH. TTX-sensitive dendritic sodium channels underlie oscillatory discharge in a vertebrate sensory neuron. J Neurosci. 1994;14(11):6453–71.
 68.
Zhang J, Carver CM, Choveau FS, Shapiro MS. Clustering and functional coupling of diverse ion channels and signaling proteins revealed by super-resolution STORM microscopy in neurons. Neuron. 2016;92(2):461–78.
 69.
Bol K, Marsat G, Harvey-Girard E, Longtin A, Maler L. Frequency-tuned cerebellar channels and burst-induced LTD lead to the cancellation of redundant sensory inputs. J Neurosci. 2011;31(30):11028–38.
 70.
Ashida G, Kubo M. Suprathreshold stochastic resonance induced by ion channel fluctuation. Physica D. 2010;239(6):327–34.
 71.
Rudolph M, Destexhe A. Correlation detection and resonance in neural systems with distributed noise sources. Phys Rev Lett. 2001;86(16):3662–5.
 72.
Jun JJ, Longtin A, Maler L. Active sensing associated with spatial learning reveals memorybased attention in an electric fish. J Neurophysiol. 2016;115(5):2577–92.
 73.
Pathria RK, Beale PD. Statistical mechanics. 3rd ed. Amsterdam: Elsevier/Boston: Academic Press; 2011.
 74.
Papoulis A, Pillai SU. Probability, random variables and stochastic processes. 4th ed. Boston: McGraw-Hill; 2002.
 75.
Hsu HP. Schaum’s outline of probability, random variables, and random processes. 3rd ed. New York: McGraw-Hill; 2014.
Acknowledgements
The authors would like to thank Érik HarveyGirard, Alexandre Payeur, Wilhelm Braun, and James Jun for helpful discussions, Curtis Marcoux and AnhTuan Trinh for providing the experimental data, and Leonard Maler for proofreading the manuscript.
Availability of data and materials
The datasets analysed during the current study are available from the corresponding author on request.
Funding
Funding for this research was granted by the Natural Sciences and Engineering Research Council of Canada and the Ontario Graduate Scholarship program.
Author information
Contributions
AM and AL conceptualized the initial direction of the project. AM derived the theory, performed the numerical simulations, analyzed the experimental data, and wrote the manuscript, under the supervision of AL. AM and AL proofread, edited, and approved the final version of the manuscript.
Corresponding author
Correspondence to Alexandre Melanson.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Appendix
The particular form of the differential Chapman–Kolmogorov equation that we use here, Eq. (24), relies on the assumption of Poisson jumps. For completeness, and to make explicit the dependence of this equation on Poisson statistics, we provide the following short derivation:
Let \(X(t)\) be a continuous-time Markov process, possibly with discontinuous sample paths, in one dimension and with transition probability \(p(x,t|x_{0},t_{0})\). The Chapman–Kolmogorov equation then has the following differential form [42]:

$$ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial }{\partial x} \bigl[ A(x,t)p(x,t) \bigr] + \frac{1}{2}\frac{\partial ^{2}}{\partial x ^{2}} \bigl[ B(x,t)p(x,t) \bigr] + \int \bigl[ W(x|\tilde{x},t)p( \tilde{x},t) - W(\tilde{x}|x,t)p(x,t) \bigr] \,d\tilde{x}, $$
(29)

where

$$ W(x|\tilde{x},t) = \lim_{\Delta t \rightarrow 0} \frac{p(x,t+\Delta t|\tilde{x},t)}{\Delta t}, \quad x \neq \tilde{x}. $$
(30)
The first and second terms of Eq. (29) are associated with the continuous part of \(X(t)\) and are often referred to as the advective and diffusive terms in the parlance of partial differential equations [38]. The last term in Eq. (29), on the other hand, represents how probability mass is transferred through discontinuities in \(X(t)\), as the function W is the instantaneous transition rate between different values of x. Indeed, sample path continuity is expressed as \(W(x|\tilde{x},t) = 0\) [42], and in that case, Eq. (29) reduces to the Fokker–Planck equation. Note that, in the case where both the drift and diffusion functions vanish, Eq. (29) reduces to the so-called master equation in statistical mechanics [36, 73].
In order to evaluate Eq. (30), we first introduce three mutually exclusive and collectively exhaustive events: \(E_{0}=\{\mbox{no jumps in }[t,t+\Delta t]\}\), \(E_{1}=\{\mbox{exactly one jump in }[t,t+ \Delta t]\}\), and \(E_{2}=\{\mbox{two or more jumps in }[t,t+\Delta t]\}\), such that \(E_{i} \cap E_{j} = \varnothing \), for \(i \neq j\), and \(\operatorname{Prob} ( \bigcup_{i} E_{i} ) =1\). From the law of total probability, we get

$$ p(x,t+\Delta t|\tilde{x},t) = \sum_{i=0}^{2} p(x,t+\Delta t|\tilde{x},t;E _{i}) \operatorname{Prob} (E_{i}). $$
(31)
Assuming that discontinuities occur through a Poisson process with parameter λ, we can use the well-known results [74, 75]:

$$ \operatorname{Prob} (E_{0}) = 1 - \lambda \Delta t + o(\Delta t), \qquad \operatorname{Prob} (E_{1}) = \lambda \Delta t + o(\Delta t), \qquad \operatorname{Prob} (E_{2}) = o(\Delta t), $$
(32)
where \(o(\Delta t)\) denotes a quantity that goes to zero faster than Δt:

$$ \lim_{\Delta t \rightarrow 0} \frac{o(\Delta t)}{\Delta t} = 0. $$
(33)
Inserting the sum in Eq. (31) into the limit of Eq. (30) results in the sum of three limits, associated with \(E_{0}\), \(E_{1}\), and \(E_{2}\):

$$ W(x|\tilde{x},t) = \lim_{\Delta t \rightarrow 0} \sum_{i=0}^{2} \frac{p(x,t+ \Delta t|\tilde{x},t;E_{i}) \operatorname{Prob} (E_{i})}{\Delta t}, $$
where the first term vanishes because of sample path continuity between jump events, \(\lim_{\Delta t \rightarrow 0}p(x,t+\Delta t|\tilde{x},t;E _{0})/\Delta t=0\), while the third one vanishes as per Eq. (33), \(\lim_{\Delta t \rightarrow 0}\operatorname{Prob}(E_{2})/\Delta t =0\). If jump amplitudes are distributed according to \(Q_{B}\), then this distribution can be defined as \(Q_{B}(x-\tilde{x}) \equiv \lim_{\Delta t \rightarrow 0}p(x,t+ \Delta t|\tilde{x},t;E_{1})\), which results in

$$ W(x|\tilde{x},t) = \lambda Q_{B}(x-\tilde{x}). $$
When inserted into Eq. (29), this yields

$$ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial }{\partial x} \bigl[ A(x,t)p(x,t) \bigr] + \frac{1}{2}\frac{\partial ^{2}}{\partial x ^{2}} \bigl[ B(x,t)p(x,t) \bigr] + \lambda \int \bigl[ Q_{B}(x- \tilde{x})p(\tilde{x},t) - Q_{B}(\tilde{x}-x)p(x,t) \bigr] \,d \tilde{x}. $$
If we assume a timehomogeneous drift function, additive noise, and a positive support for \(Q_{B}\), we recover Eq. (24).
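The small-Δt expansions of the Poisson probabilities used in this derivation, Eqs. (32)–(33), can be checked numerically against the exact Poisson expressions; the rate λ below is an arbitrary illustrative value.

```python
import math

# Numerical check of the small-dt Poisson expansions:
#   Prob(E0) = exp(-lam*dt)        = 1 - lam*dt + o(dt)
#   Prob(E1) = lam*dt*exp(-lam*dt) = lam*dt + o(dt)
#   Prob(E2) = 1 - Prob(E0) - Prob(E1) = o(dt)
# The residual Prob(E2)/dt should vanish as dt -> 0.
lam = 3.0
resid2_over_dt = []
for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    p0 = math.exp(-lam * dt)
    p1 = lam * dt * math.exp(-lam * dt)
    p2 = 1.0 - p0 - p1          # probability of two or more jumps
    resid2_over_dt.append(p2 / dt)
    print(f"dt={dt:.0e}  Prob(E2)/dt = {p2 / dt:.3e}")
```

Since \(\operatorname{Prob}(E_{2}) \approx (\lambda \Delta t)^{2}/2\) to leading order, the printed ratio shrinks linearly with Δt, confirming that the two-or-more-jumps term contributes nothing to the limit in Eq. (30).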
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Keywords
 Stochastic differential equations
 Jumpdiffusion processes
 Membrane noise
 Channel noise
 Electric fish
 Pyramidal neurons
 Fokker–Planck equation
 Chapman–Kolmogorov equation