Data-driven inference for stationary jump-diffusion processes with application to membrane voltage fluctuations in pyramidal neurons

The emergent activity of biological systems can often be represented as low-dimensional, Langevin-type stochastic differential equations. In certain systems, however, large and abrupt events occur and violate the assumptions of this approach. We address this situation here by providing a novel method that reconstructs a jump-diffusion stochastic process based solely on the statistics of the original data. Our method assumes that these data are stationary, that diffusive noise is additive, and that jumps are Poisson. We use threshold-crossing of the increments to detect jumps in the time series. This is followed by an iterative scheme that compensates for the presence of diffusive fluctuations that are falsely detected as jumps. Our approach is based on probabilistic calculations associated with these fluctuations and on the use of the Fokker–Planck and the differential Chapman–Kolmogorov equations. After some validation cases, we apply this method to recordings of membrane noise in pyramidal neurons of the electrosensory lateral line lobe of weakly electric fish. These recordings display large, jump-like depolarization events that occur at random times, the biophysics of which is unknown. We find that some pyramidal cells increase their jump rate and noise intensity as the membrane potential approaches spike threshold, while their drift function and jump amplitude distribution remain unchanged. As our method is fully data-driven, it provides a valuable means to further investigate the functional role of these jump-like events without relying on unconstrained biophysical models.


Introduction
Complex systems are ubiquitous in many areas of science, including biology, neuroscience, climatology, engineering, as well as in finance and social sciences [1,2]. The common feature uniting such vastly different systems is the nonlinear interaction between their numerous microscopic constituents. This collective activity leads to the emergence of macroscopic order that cannot be reduced to microscopic properties [3]. It is often these macroscopic variables that can be measured experimentally, allowing the emergent dynamics of the system to be captured by low-dimensional data sets. The search for a macroscopic-level representation of the system thus relies on extracting dynamical information from observed time series [4].
For some systems, the high-dimensional, microscopic degrees of freedom can be well approximated by simple stochastic fluctuations. These fluctuations often participate as dynamical noise in the macroscopic evolution of the system. The low-dimensional representation of the system's dynamics can then be expressed as a stochastic dynamical system [5]. More specifically, the observed data are usually assumed to satisfy a Langevin-type stochastic differential equation (SDE). A common approach is to obtain the drift and diffusion functions of this equation by estimating the first and second Kramers-Moyal coefficients [6,7]. The resulting model is completely data-driven and captures the core phenomenology of the original data without relying on knowledge or assumptions about the microscopic constituents of the observed system. This approach has been successfully applied in a variety of contexts, ranging from neuronal dynamics [8][9][10][11][12] and heart rate variability [13,14] to turbulence [15,16] and the calibration of optical tweezers [17], among others (see Ref. [5] for a review).
In all mentioned cases, however, the noise is assumed to be purely diffusive, i.e., random fluctuations with continuous sample paths. This description is incomplete if, in addition to diffusive fluctuations, large and abrupt events appear at random times throughout the time series. In this case, jump-diffusion stochastic processes provide a more appropriate framework to model these data. Jump-diffusion processes have been used in neuroscience as a model for the spatial [18][19][20][21] and temporal [22] organization of synaptic bombardment, in physics as a model for noise-driven transport in ratchet potentials [23][24][25], as well as in finance [26,27] and soil moisture dynamics [28]. The Langevin approach is likely to fail if the observed system exhibits jump-diffusion characteristics, such as skewed distributions and sudden large jumps. In such cases, and especially when the microscopic dynamics are unknown, extracting a phenomenological model from the experimental data would provide a valuable tool to probe the dynamics of the observed system, its interaction with other systems, and the interplay between diffusive and jump noise sources in shaping the observed behavior.
In this paper, we present a novel, data-driven inference method that fits a jump-diffusion SDE to experimental time series. By detecting jumps through threshold-crossing and by calculating the contribution of diffusive fluctuations that are falsely detected as jumps, we iteratively estimate the drift function, noise intensity, jump rate, and jump amplitude distribution. The result of this semi-parametric method is a jump-diffusion SDE that successfully fits the original data. Our method is applicable in cases where these data are stationary, with additive diffusive noise and Poisson jumps. Note that other studies have attempted to infer jump-diffusion dynamics from data, but they rely on assuming a parametric form of the jump amplitude distribution [29][30][31], or consider only Lévy processes [32,33].
We test our method with two validation cases, where realizations of known jump-diffusion processes with different characteristics are used as validation data. In both cases, by using only the simulated time series, we precisely recover the correct parameters and functions used in the original simulations. We also compare the autocorrelation functions (ACF) of the validation data and of the fitted SDE. We then apply our method to recordings of intrinsic membrane voltage fluctuations in pyramidal neurons of electric fish. These recordings contain sudden and unpredictable jump-like events that occur among more typical diffusive membrane noise. Although the exact biophysical origin of these fluctuations is unknown, we find that the recordings are well fitted by jump-diffusion processes. We evaluate goodness of fit quantitatively by comparing the observed and the estimated probability density functions (PDF) and power spectral densities (PSD). Interestingly, we find that some pyramidal cells increase their jump rate and noise intensity as the membrane potential approaches spike threshold, while their drift function and jump amplitude distribution remain unchanged.
In Sect. 2, we present the various steps involved in the inference procedure, including the detection scheme, the calculation of false-positive statistics, and the iterative approach. In Sect. 3, we validate this procedure against one pure diffusion and two jump-diffusion test cases, and then we apply it to neurophysiological recordings of membrane noise. This is followed by a discussion of possible generalizations of our method and of future work with the experimental data (Sect. 4).

Definitions and overview
Let {X(t)} represent data in the form of a stationary time series. In what follows, {X(t)} is obtained either from experimental observations or from numerical simulation. In either case, the situation of interest here is when {X(t)} exhibits both diffusive fluctuations and abrupt events (henceforth called jumps). From {X(t)}, our goal is then to fit a jump-diffusion SDE of the form

dY(t) = F(Y) dt + √(2D) dW(t) + dJ(t),    (1)

where F is the drift function, D is the noise intensity, and W(t) is a Wiener process (i.e., Brownian motion). Here J(t) is a compound Poisson process representing the jumps,

J(t) = Σ_{i=1}^{N_λ(t)} B_i,    (2)

where N_λ(t) is a Poisson point process with rate λ, and the B_i's are the independent and identically distributed jump amplitudes drawn from a distribution Q_B. For a small enough sampling interval, or time step Δt, jumps will occur with probability Γ_B ≡ λΔt.
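A process of this kind can be simulated with a standard Euler-Maruyama step plus compound Poisson jumps. The following sketch is our own illustration (function names and parameter values are ours, not from the paper), here with a linear, Ornstein-Uhlenbeck-type drift and exponential jump amplitudes:

```python
import numpy as np

def simulate_jump_diffusion(F, D, lam, draw_B, dt, n_steps, x0=0.0, rng=None):
    """Euler-Maruyama integration of dY = F(Y) dt + sqrt(2D) dW + dJ,
    where J is a compound Poisson process with rate lam and i.i.d.
    amplitudes drawn by draw_B(rng)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps + 1)
    x[0] = x0
    sqdt = np.sqrt(dt)
    for k in range(n_steps):
        dW = rng.normal(0.0, sqdt)
        # number of jumps in this step is Poisson(lam * dt), usually 0 or 1
        jump = sum(draw_B(rng) for _ in range(rng.poisson(lam * dt)))
        x[k + 1] = x[k] + F(x[k]) * dt + np.sqrt(2.0 * D) * dW + jump
    return x

# Illustrative choices: mean-reverting drift, positive exponential jumps
x = simulate_jump_diffusion(F=lambda y: -0.5 * y, D=0.1, lam=2.0,
                            draw_B=lambda rng: rng.exponential(0.5),
                            dt=1e-3, n_steps=50_000,
                            rng=np.random.default_rng(0))
```

Because the jumps are positive and the drift is mean-reverting, the stationary mean of such a realization sits above zero, which is the asymmetry the inference method exploits.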
Although the method developed herein is applicable to a wide array of experimental data types, we do, however, impose certain conditions on the underlying dynamical process. Notably, we limit our analysis to systems where the dynamics are stationary (in the strict sense), the diffusive noise is additive, the jumps have positive amplitudes (B_i > 0), and where the Poisson rate λ is constant in time and small enough so that Γ_B ≪ 1. Furthermore, we assume here that F is continuous and that a single stable fixed point arises from the deterministic part of the dynamics, but our method could be generalized to multistable systems. Except for this restriction, we assume no particular shape for the drift function as long as it generates a stationary process. Finally, we assume that, on average, jump amplitudes are greater than the typical magnitude of diffusive increments, E[B_i] > √(2DΔt), by one order of magnitude or more.
In terms of the time series itself, we assume that the data were sampled at a high frequency, such that Δt can be assumed to be small with respect to the total duration of the time series. Note that the value of Δt is set by the experiments that produced the data, and so is not a variable we can control. For experimental data, however, it is possible that the Markov property breaks down on the timescale of individual observations [5], but we assume that, in that case, the time series can be downsampled to a timescale where the Markov property then holds.
Furthermore, in the cases considered below, jumps appear unambiguously in the data, and so a jump-diffusion approach is warranted. In situations where this is not the case, the presence of jumps can be assessed from the fourth-order Kramers-Moyal coefficient, which is non-vanishing for processes with discontinuities [31,34]. Lastly, note that we make no assumptions regarding the structure of the ACF of {X(t)}; we only assume stationarity in these data. In fact, we compare the ACFs (and PSDs) of time series generated by the fitted SDE with those of the original data as a means of validating the inference procedure.
Below we develop a data-driven inference procedure that successfully generates the estimates D̂, λ̂, F̂, and Q̂_B, where the unknown functions F and Q_B are estimated non-parametrically. This inference procedure results in a fitted stochastic process Y(t) that is an adequate model of the original data. In this sense, we implicitly assume that the data {X(t)} are sampled from a realization of Y(t). In our calculations, we thus associate the first-order equilibrium PDF of Y(t), P_Y, with the empirical PDF of {X(t)} (obtained by kernel density estimation), P_X.
Our approach is predicated on the detection of jumps in the data via the application of a threshold θ* > 0 on the increments ΔX(t + Δt) ≡ X(t + Δt) − X(t). This procedure creates a pool of detected jumps with various amplitudes. Let Q_C be the empirical PDF estimated from these measured jump amplitudes. Also, let n be the total number of increments in the time series, and m be the number of increments whose value is greater than θ*. We define the (overall) jump detection probability as Γ_C ≡ Prob{detecting an increment larger than θ* across an interval Δt}, which we estimate from the data as Γ̂_C = m/n. An inherent challenge with this threshold-crossing approach is that, in addition to the true jumps generated by the compound Poisson process, we also unavoidably detect large diffusive fluctuations that are falsely identified as jumps, henceforth called false positives (FP; Fig. 1, top). A direct estimation of the true jump rate λ and of the true jump amplitude distribution Q_B is thus impossible because the detected jump pool consists of a mixture of true jumps and false positives. Our main contribution, and the central component of our inference procedure, is the calculation of FP-related statistics, namely the FP detection probability Γ_A ≡ Prob{detecting a diffusive increment larger than θ* across a given time step Δt} and the distribution Q_A from which FP amplitudes are drawn. Once Γ_A and Q_A are calculated, we then extract λ (or, equivalently, Γ_B) and Q_B from Γ_C and Q_C. Note that the subscripts "A", "B", and "C" will hereafter refer to FPs, true jumps, and both combined, respectively. More precisely, we measure quantities in "C" from the detected jump pool, we calculate FP statistics in "A", and we seek the true jump statistics in "B".

Choice of threshold
To ensure that a minimal number of true jumps are missed during the jump detection procedure, the threshold θ* should ideally be set as low as possible. If set too low, however, the number of FPs becomes so large that the statistics of true jumps, i.e., λ and Q_B, cannot be extracted from the statistical fluctuations of Q_C and Γ_C. We thus aim for an intermediate value of θ* that captures most true jumps while allowing a manageable number of FPs. This is done by relying on our assumption of positive jump amplitudes and, more precisely, by exploiting the asymmetry between positive and negative increment statistics. Note that the threshold depends implicitly on the time step of the original time series: a smaller Δt means that diffusive fluctuations are smaller, and thus a smaller value of θ* can be used.
Let {ΔX₊} and {ΔX₋} be the sets of positive and negative increments of {X(t)}, respectively (Fig. 2(A)). In addition, let M₊(θ) = {ΔX₊ : ΔX₊ > θ} and M₋(θ) = {−ΔX₋ : −ΔX₋ > θ} be the reduced sets truncated by θ, where θ spans the common range of {ΔX₊} and {−ΔX₋}. We use the difference between the sample means of these sets, M̄₊ − M̄₋, as a function of θ to quantify the relative importance of true jumps with respect to diffusive fluctuations over different increment sizes. The value of θ for which M̄₊ − M̄₋ is a maximum corresponds to the greatest separability between positive and negative increments. We find, however, that using the inflection point, i.e., where the second derivative becomes 0, located to the left of this maximum (Fig. 2(B), asterisks) is a better choice of threshold. This slightly lower value retains a greater range of true jumps, which is desirable, while avoiding the inclusion of an overwhelmingly large number of FPs. Choosing this inflection point rather than the maximum impacts primarily the estimation of λ, since it relies on the proper detection of true jumps in the time series. A slightly higher value of θ* will introduce a bias in λ̂. For instance, in the first jump-diffusion validation case that we consider in Sect. 3.2, choosing the maximum as the threshold yields roughly a 1% increase in the error on λ̂ compared to when the inflection point is used. Our approach for setting θ* is thus motivated by the fact that it is advantageous to choose a value as small as possible for θ*, and the inflection point in the curve of M̄₊ − M̄₋ provides a reliable way to achieve this.
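This threshold rule can be sketched as follows. The implementation is our own (grid size and the inflection-point search are implementation choices not specified in the text): it scans θ over the common range of the positive and mirrored negative increments, forms the sample-mean difference, and picks the zero crossing of its second derivative to the left of the maximum.

```python
import numpy as np

def choose_threshold(x, n_grid=200):
    """Pick theta* from the difference of truncated sample means of the
    positive and mirrored negative increments of x."""
    dx = np.diff(x)
    pos, neg = dx[dx > 0], -dx[dx < 0]
    hi = min(pos.max(), neg.max())
    thetas = np.linspace(0.0, hi, n_grid, endpoint=False)
    diff = np.array([pos[pos > t].mean() - neg[neg > t].mean() for t in thetas])
    i_max = int(np.argmax(diff))
    # second derivative of the difference curve; take its last sign change
    # before the maximum, falling back to the maximum itself if none exists
    d2 = np.gradient(np.gradient(diff, thetas), thetas)
    flips = np.flatnonzero(np.diff(np.sign(d2[:i_max + 1])))
    i_star = flips[-1] if flips.size else i_max
    return thetas[i_star]
```

On data with clearly separated jump and diffusion scales, the returned value lands between the bulk of the diffusive increments and the smallest jumps.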

Jump detection
Here we describe how the threshold is applied to the increments in order to generate the detected jump pool. We apply a detection scheme tailored specifically to handle two aspects that we observe in experimental data with jumps, and it is inspired by the method used in [35]. Firstly, if the data are resolved on a fine enough time scale, jumps may last longer than a single sampling interval. Secondly, jumps need not be followed immediately by negative increments. In data, and in some simulations as well, the diffusive increments following a jump can still be positive, but we seek a method for identifying when the jump actually ends (Fig. 1, inset). These two considerations shape the method used to calculate the FP amplitude distribution in Sect. 2.4.2.
When a given increment is larger than the threshold, a jump onset time T_on is registered, and if the next increment is below threshold, the associated offset time T_off is registered (even if this increment is positive). This defines a jump of duration T_off − T_on = Δt, where Δt is the sampling interval of the data. We henceforth refer to this type of jump as a singlet, as it spans the duration of a single time step. In contrast, if two or more successive increments are above threshold, then the jump is of duration 2Δt or more, which we refer to as a doublet, triplet, and so forth. In other words, the jump offset time is only registered at the end of the sequence of above-threshold increments. With the onset and offset times identified, a jump amplitude is defined as the difference between X(T_off) and X(T_on).
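The detection scheme above can be sketched as follows; `detect_jumps` (our naming) groups runs of consecutive above-threshold increments into single detected jumps and reports onsets, offsets, amplitudes X(T_off) − X(T_on), and durations in units of the time step (1 = singlet, 2 = doublet, ...):

```python
import numpy as np

def detect_jumps(x, theta):
    """Threshold the increments of x and merge runs of consecutive
    above-threshold increments into single detected jumps."""
    above = (np.diff(x) > theta).astype(int)
    padded = np.concatenate(([0], above, [0]))
    starts = np.flatnonzero(np.diff(padded) == 1)   # index of X(T_on)
    ends = np.flatnonzero(np.diff(padded) == -1)    # index of X(T_off)
    return starts, ends, x[ends] - x[starts], ends - starts

# Toy series: a doublet (indices 1..3) followed by a singlet (indices 5..6)
x = np.array([0.0, 0.1, 1.2, 2.5, 2.6, 2.5, 3.8])
starts, ends, amps, durs = detect_jumps(x, theta=0.5)
```

Note that the offset is only registered once the run of above-threshold increments ends, so the doublet is reported as one jump of amplitude 2.4 rather than two separate events.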

FP and true jump statistics
Here we present the calculations of the FP detection probability Γ_A and of the FP amplitude distribution Q_A. We then show how these quantities are used to extract the true jump rate λ and the true jump amplitude distribution Q_B from the detected jump probability Γ_C and the detected jump amplitude distribution Q_C. For the calculations in this subsection, we assume that the drift function F and noise intensity D are known. In the next subsection, we show how these calculations can be incorporated into an iterative scheme that allows the simultaneous estimation of all unknowns, including F and D. Furthermore, note that the FP-related calculations involve only the diffusive part of Eq. (1) since, by definition, FPs occur during the purely diffusive segments between true jumps. The calculations in Sects. 2.4.1 and 2.4.2 thus pertain only to diffusive increments ΔY_diff(t). Finally, note that we perform validation tests of the calculations of Γ_A and Q_A in Sect. 3.1.

FP detection probability
As mentioned in Sect. 2.1, sampling a jump-diffusion process such as Eq. (1) at finite intervals and applying a threshold on the observed increments leads to the detection of FPs, i.e., diffusive (rather than true jump) increments larger than the threshold. Importantly, these FPs occur with a probability that depends on the value of the process at the start of the interval. Let this conditional detection probability be defined as

α(y) ≡ Prob{ΔY_diff(t + Δt) > θ* | Y(t) = y}.    (3)

As α does not depend explicitly on time, this definition relies on our assumption that Y(t) is stationary. The y-dependence arises from the drift function F. Indeed, if the drift function is positive (respectively, negative) at a given time, it biases diffusive fluctuations toward (away from) the threshold. This translates into an FP detection probability that assumes higher values when F(y) > 0 than when F(y) < 0. We now turn to the explicit calculation of α(y).
Let Ξ_{ΔY|Y}(ξ | y) denote the PDF of the diffusive increments, conditioned on the value of the process at the start of the interval, and where ξ assumes the possible values of the increments. Note that, because the time step remains constant, it is always implied that the increments are defined across an interval Δt. Given that we assume Δt to be sufficiently small, we approximate Ξ_{ΔY|Y} as the short-time propagator of the Fokker-Planck equation [36,37] associated with the diffusive part of the jump-diffusion process (recall that what concerns us here are the purely diffusive segments between the true jumps of Y(t)). We thus write

Ξ_{ΔY|Y}(ξ | y) = (4πDΔt)^{−1/2} exp[−(ξ − F(y)Δt)² / (4DΔt)],    (4)

that is, a Gaussian distribution with mean F(y)Δt and variance 2DΔt.
For the test cases presented in Sect. 3 (with Δt = 10⁻⁴ s), we have validated this approximation by comparing it with numerical solutions of the associated Fokker-Planck equation solved at a finer temporal resolution (Δt/1000) across the time step Δt. The numerical solutions were indeed well fitted by the approximation in Eq. (4) (not shown). Numerical integration was performed with a custom partial differential equation solver that implements a finite volume discretization along with the fully implicit Euler scheme. The advective term was treated with the upwind scheme, and a linear interpolation profile for the spatial derivative was applied to the diffusive term. The resulting algebraic equations were solved with the tridiagonal matrix algorithm [38].
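For reference, the tridiagonal matrix algorithm mentioned above ([38], also known as the Thomas algorithm) solves the banded linear systems produced by such implicit finite volume discretizations. The following is a minimal implementation of the generic algorithm, not the paper's actual solver (by convention, a[0] and c[-1] are unused and may be set to zero):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve A x = d where A is tridiagonal with sub-diagonal a
    (a[0] unused), diagonal b, and super-diagonal c (c[-1] unused)."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # forward elimination
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The algorithm runs in O(n) time and is stable for the diagonally dominant systems that implicit diffusion discretizations produce.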
Once the conditional PDF of the increments is evaluated with Eq. (4), we calculate the conditional FP detection probability, given that the process starts at y, as follows:

α(y) = ∫_{θ*}^{∞} Ξ_{ΔY|Y}(ξ | y) dξ,    (5)

that is, the probability of observing an increment larger than θ* starting at y. Finally, the unconditional FP detection probability is calculated based on the empirical PDF of {X(t)}, P_X:

Γ_A = ∫ α(y) P_X(y) dy.    (6)

We validate these calculations in Sect. 3.1.
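Under the Gaussian propagator of Eq. (4), the integral defining α(y) has a closed form in terms of the error function, and Γ_A (Eq. (6)) is then a weighted average over the empirical PDF. A minimal sketch (our own function names; a simple trapezoidal rule stands in for the integral over y):

```python
import numpy as np
from math import erf, sqrt

def alpha(y, F, D, dt, theta):
    """Probability that a diffusive increment starting at y exceeds theta,
    i.e. P(N(F(y) dt, 2 D dt) > theta), via the Gaussian tail."""
    z = (theta - F(y) * dt) / sqrt(2.0 * D * dt)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def gamma_A(F, D, dt, theta, y_grid, p_X):
    """Unconditional FP probability: average alpha(y) over the empirical
    PDF p_X sampled on y_grid (trapezoid rule)."""
    w = np.array([alpha(y, F, D, dt, theta) for y in y_grid]) * p_X
    return float(np.sum((w[:-1] + w[1:]) * np.diff(y_grid)) / 2.0)
```

With a flat drift and a threshold at the median increment, α reduces to 1/2, which provides a quick sanity check of the implementation.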

FP amplitude distribution
We now proceed with the calculation of Q_A, i.e., the distribution from which FP amplitudes are drawn. First, recall that our detection scheme allows for jumps of different durations (Sect. 2.3). As such, the detection of an FP implies either a succession of above-threshold increments (e.g., Fig. 3(A)) or, at least, a single above-threshold increment. Let T_on^FP denote the FP onset time, that is, the time at the start of the first above-threshold increment, and let Y_0 ≡ Y(T_on^FP) be referred to as the starting value of the FP. Moreover, let τ denote the FP duration, an integer multiple of Δt, such that τ = Δt corresponds to an FP singlet, τ = 2Δt to an FP doublet, and so forth.
Figure 3 (B) The situation in (A) is addressed by calculating ρ_i, the PDF of Y_i conditioned on Y_0. (C) From ρ_{i−1}, we then calculate Ξ_i, the PDF of ΔY_i conditioned on Y_0, which is used to evaluate Z_i (shaded area), the probability that the ith increment is above threshold, given Y_0. Note that, although they look similar, the Ξ_i's are slightly different from each other.

In order to calculate Q_A, let us first identify the factors that influence FP amplitudes. Firstly, it must be noted that the FP amplitudes will exhibit a similar y-dependence as that discussed in the preceding section. Indeed, an increment starting at Y_0 ≡ Y(T_on^FP) = y will tend to be larger when F(y) > 0 than when F(y) < 0. Secondly, the amplitude of an FP will also depend on its duration, τ. For instance, the three increments of a triplet FP will summate and tend to have a larger amplitude than that of a singlet. The FP amplitudes A will thus covary with Y_0 and with τ, but note that τ also depends on Y_0. Indeed, longer FPs will tend to occur where the drift function is more positive, and vice versa. To account for these dependencies, let us define the trivariate random variable {A, τ, Y_0}, distributed according to its joint PDF P_{A,τ,Y_0}(a, iΔt, y), where we explicitly write τ as an integer multiple of Δt. What we seek then is the marginal

Q_A(a) = Σ_{i=1}^{∞} ∫ P_{A,τ,Y_0}(a, iΔt, y) dy,    (7)

where the sum extends over all possible FP durations, and where a > 0 represents all possible amplitudes. From the definition of conditional PDFs, we can expand the joint PDF as follows:

P_{A,τ,Y_0}(a, iΔt, y) = P_{A|τ,Y_0}(a | iΔt, y) P_{τ|Y_0}(iΔt | y) P_{Y_0}(y),    (8)

where P_{τ|Y_0}(iΔt | y) = Prob(detecting an FP of duration iΔt, given the starting value y) is the conditional probability mass function of the FP duration τ. We can thus write Eq. (7) as

Q_A(a) = ∫ ( Σ_{i=1}^{∞} P_{A|τ,Y_0}(a | iΔt, y) P_{τ|Y_0}(iΔt | y) ) P_{Y_0}(y) dy.    (9)

The sum in the large parentheses is a function of y, and Eq. (9) is merely calculating its average with respect to the starting value Y_0.
This sum can further be interpreted as a so-called mixture distribution: consider a collection of random variables, one of which is chosen according to a certain probability (its mixture weight) and is then realized according to its own PDF (its mixture component). The outcome of this experiment is itself a random variable whose PDF is called a mixture distribution and is expressed as a sum over the PDFs of the random variables in the collection, weighted by their respective probabilities. In our case, for a fixed value of Y_0, an FP duration is drawn according to a countable set of mixture weights P_{τ|Y_0}, and an FP amplitude is then realized according to the associated mixture component P_{A|τ,Y_0}. In practice, the sum is truncated after the first few terms because the subsequent mixture weights become negligible.
From Eq. (9), we see that in order to arrive at the desired Q_A, the functions P_{A|τ,Y_0}, P_{τ|Y_0}, and P_{Y_0} must first be calculated. Let us first consider the latter. Because Y_0 represents, by definition, the value of Y(t) at the start of an above-threshold increment, we can express its PDF in terms of the joint PDF of ΔY_diff(t + Δt) and Y(t):

P_{Y_0}(y) = K ∫_{θ*}^{∞} P_{ΔY_diff(t+Δt), Y(t)}(ξ, y) dξ
           = K ∫_{θ*}^{∞} Ξ_{ΔY|Y}(ξ | y) P_X(y) dξ,    (10)

where K is a normalization constant and where, in the last line, we have replaced P_Y by the empirical PDF of {X(t)}. Note that we integrate with θ* as a lower bound in order to enforce that Y_0 is associated with the onset of an above-threshold increment. Let us now consider the calculation of P_{A|τ,Y_0}.
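Combining Eq. (10) with the Gaussian propagator of Eq. (4), P_{Y_0}(y) is proportional to α(y)P_X(y): the chance that a point both occurs and launches an above-threshold increment. A sketch on a grid (our naming; the normalization uses a trapezoidal rule):

```python
import numpy as np
from math import erf, sqrt

def p_Y0(y_grid, p_X, F, D, dt, theta):
    """P_{Y0}(y) proportional to alpha(y) * P_X(y), normalized over the
    grid with a trapezoid rule."""
    s = sqrt(2.0 * D * dt)
    # Gaussian tail probability of an increment exceeding theta from y
    alpha = np.array([0.5 * (1.0 - erf((theta - F(y) * dt) / s / sqrt(2.0)))
                      for y in y_grid])
    w = alpha * p_X
    norm = np.sum((w[:-1] + w[1:]) * np.diff(y_grid)) / 2.0
    return w / norm
```

When the drift is flat, α is constant and P_{Y_0} reduces to P_X itself; with a mean-reverting drift, the weighting shifts mass toward values where the drift pushes the process upward.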
In what follows, we simplify the notation by labeling time with the index i, such that i = 0 represents the time T_on^FP, i = 1 the time T_on^FP + Δt, i = 2 the time T_on^FP + 2Δt, and so forth. With this notation, Y_i ≡ Y(T_on^FP + iΔt) denotes the ith point following the FP onset time, and ΔY_i^diff ≡ Y_i − Y_{i−1} the ith diffusive increment. Furthermore, our focus here is on FPs of duration, say iΔt, which corresponds to a sequence of i successive above-threshold increments. As such, the forthcoming calculations involve PDFs that are implicitly conditioned on the event {ΔY_n^diff > θ*, ∀n ≤ i}. For an FP of duration τ = iΔt starting at Y_0, we define its amplitude as A = Y_i − Y_0, and we seek the conditional PDF P_{A|τ,Y_0}. For this purpose, let

ρ_i(y_i | y_0) ≡ P_{Y_i|Y_0}(y_i | y_0), such that P_{A|τ,Y_0}(a | iΔt, y_0) = ρ_i(y_0 + a | y_0).    (11)

The ρ_i's, for i > 1, are evaluated sequentially based on the fact that

ρ_i(y_i | y_0) = ∫ P_{Y_i|Y_{i−1}}(y_i | y_{i−1}) ρ_{i−1}(y_{i−1} | y_0) dy_{i−1}.    (12)

To enforce the condition of above-threshold increments, {ΔY_n^diff > θ*, ∀n ≤ i}, we evaluate P_{Y_i|Y_{i−1}} based on Eq. (4), but we truncate the distribution below ξ = θ*:

P_{Y_i|Y_{i−1}}(y_i | y_{i−1}) = K Ξ_{ΔY|Y}(y_i − y_{i−1} | y_{i−1}) for y_i − y_{i−1} > θ*, and 0 otherwise,    (13)

where K is a normalization constant. With Eq. (13) and (12), we finalize the calculation of P_{A|τ,Y_0} in Eq. (11). In Fig. 3(B), we see the representation of the ρ_i for an FP triplet. From ρ_i, we can also calculate the PDF of ΔY_i^diff, conditioned on Y_0 (this will be useful in the calculation of P_{τ|Y_0}). Let this PDF be defined as Ξ_i(ξ) ≡ P_{ΔY_i|Y_0}(ξ | y_0). We calculate it as a marginal over Y_{i−1}:

Ξ_i(ξ) = ∫ Ξ_{ΔY_i|Y_{i−1}}(ξ | y_{i−1}) ρ_{i−1}(y_{i−1} | y_0) dy_{i−1},    (14)

where the dependence of Ξ_{ΔY_i|Y_{i−1}} on Y_0 disappears because of the Markov property. In Fig. 3(C), we see the representation of the Ξ_i's for an FP triplet. We now turn to the calculation of P_{τ|Y_0}, the probability of observing an FP of duration τ, conditioned on the starting value Y_0. We are interested in the conditional probability of the event {τ = iΔt}, where i is an integer.
This event is equivalent to the intersection of the events

{τ = iΔt} = E_1 ∩ E_2 ∩ ⋯ ∩ E_i ∩ Ē_{i+1}, with E_n ≡ {ΔY_n^diff > θ*},    (15)

where Ē_{i+1} denotes the complement of E_{i+1}. In other words, we obtain an FP of duration iΔt when the first i increments are above threshold, but the (i + 1)th increment is below threshold.
By successively applying the definition of conditional probability, we can expand P_{τ|Y_0} as follows:

P_{τ|Y_0}(iΔt | y_0) = Prob(E_1 | y_0) Prob(E_2 | E_1, y_0) ⋯ Prob(Ē_{i+1} | E_i, …, E_1, y_0),

where the terms

Z_i(y_0) ≡ Prob(E_i | E_{i−1}, …, E_1, y_0)

represent the probability that the ith increment is above threshold, given that the i − 1 previous increments were also above threshold, and given the starting value y_0. These Z_i's can be calculated from the Ξ_i's of Eq. (14) as (Fig. 3(B), shaded area):

Z_i(y_0) = ∫_{θ*}^{∞} Ξ_i(ξ) dξ.

We now arrive at the desired probability mass function:

P_{τ|Y_0}(iΔt | y_0) = (1 − Z_{i+1}(y_0)) ∏_{n=1}^{i} Z_n(y_0),    (16)

where we have used the fact that 1 − Z_{i+1} is equal to the probability that the (i + 1)th increment is below threshold, and where we have defined Z_1(y_0) ≡ Prob(E_1 | y_0) = α(y_0), i.e., the probability that the first increment after Y_0 is above threshold. Once Eq. (10), (11), and (16) are evaluated, we apply Eq. (9) to obtain the desired Q_A. Using this approach, we obtain an excellent agreement between theory and simulations, as reported in Sect. 3.1.
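The recursion behind Eqs. (12)-(16) can be sketched numerically on a value grid. The function below is our own illustration (grid sizes and the per-starting-value renormalization of the truncated propagator are implementation choices): it returns P(τ = iΔt | y₀) for the first few durations. As a sanity check, for a flat drift F ≡ 0, α is constant and the mass function reduces to the geometric law αⁱ(1 − α).

```python
import numpy as np
from math import erf, sqrt

def fp_duration_pmf(y0, F, D, dt, theta, n_max=3, n_grid=400):
    """P(tau = i*dt | Y0 = y0) = Z_1 ... Z_i (1 - Z_{i+1}), propagating
    rho_i with a theta-truncated Gaussian propagator on a fixed grid."""
    s = sqrt(2.0 * D * dt)

    def alpha_of(v):  # single-step above-threshold probability from value v
        return 0.5 * (1.0 - erf((theta - F(v) * dt) / s / sqrt(2.0)))

    # grid wide enough to hold n_max successive above-threshold steps
    y = np.linspace(y0 + theta, y0 + (n_max + 1) * (theta + 6.0 * s), n_grid)
    dy = y[1] - y[0]
    alpha_grid = np.array([alpha_of(v) for v in y])
    drift = np.array([F(v) for v in y])

    # rho_1: propagator from y0, truncated to increments > theta
    rho = np.exp(-(y - y0 - F(y0) * dt) ** 2 / (2.0 * s * s))
    rho[y - y0 <= theta] = 0.0
    rho /= rho.sum() * dy

    Z, pmf = [alpha_of(y0)], []
    for i in range(1, n_max + 1):
        z_next = float(np.sum(alpha_grid * rho) * dy)   # Z_{i+1}
        pmf.append(np.prod(Z) * (1.0 - z_next))         # Z_1...Z_i (1 - Z_{i+1})
        Z.append(z_next)
        # rho_{i+1}: truncated kernel, renormalized per starting value
        K = np.exp(-(y[:, None] - y[None, :] - drift[None, :] * dt) ** 2
                   / (2.0 * s * s))
        K[y[:, None] - y[None, :] <= theta] = 0.0
        col = K.sum(axis=0) * dy
        rho = (K / np.where(col > 0.0, col, 1.0)[None, :]) @ rho * dy
    return np.array(pmf)
```

With a nontrivial drift, the Z_i's differ from step to step, and the mass function departs from the geometric law exactly as the y-dependence discussed above predicts.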

True jump rate
Our estimate of the true jump rate λ relies on the knowledge of the overall jump detection probability Γ_C and on the FP detection probability Γ_A (both defined in Sect. 2.1). Recall that we calculate Γ_A from Eq. (6), while we estimate Γ_C directly from the data as m/n, where m is the number of time steps with ΔX(t) > θ* (either from a true jump or an FP) and n is the total number of time steps in the data time series. On the other hand, from the definition of Γ_C, we can write

Γ_C ≡ Prob{detecting an increment larger than θ* across an interval Δt}
    = Prob{(detecting an FP across Δt) ∪ (detecting a true jump across Δt)}
    = Γ_A + Γ_B − Γ_A Γ_B,    (17)

where Γ_B = λΔt is the probability of observing a true jump in an interval Δt. This is merely a statement of the addition law of probability, which would read: the probability of detecting a jump in an interval Δt is the sum of the probability of observing a true jump, plus that of observing an FP, minus the probability of observing both at the same time, where we use the fact that FPs and true jumps are independent events. For the test cases described in Sect. 3.2, isolating λ in Eq. (17) is accurate up to an error of 0.02%.
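Isolating λ in Eq. (17) gives the closed form Γ_B = (Γ_C − Γ_A)/(1 − Γ_A) and λ = Γ_B/Δt, which can be written directly (function name and the numbers in the example are ours):

```python
def true_jump_rate(gamma_C, gamma_A, dt):
    """Invert Gamma_C = Gamma_A + Gamma_B - Gamma_A * Gamma_B for the
    true-jump probability Gamma_B, then convert to a rate lambda."""
    return (gamma_C - gamma_A) / (1.0 - gamma_A) / dt

# Hypothetical numbers: Gamma_A = 0.01 and Gamma_B = 0.02 combine to
# Gamma_C = 0.01 + 0.02 - 0.01 * 0.02 = 0.0298
lam = true_jump_rate(0.0298, 0.01, 1e-3)
```

Note that the correction term Γ_A Γ_B matters only when both probabilities are non-negligible; for Γ_B ≪ 1, the inversion is close to the naive subtraction Γ_C − Γ_A.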

True jump amplitude distribution
As in the previous subsection, we obtain an estimate for the true jump amplitude distribution Q_B based on the empirical PDF of jump amplitudes measured from the time series, Q_C, and on the calculated FP amplitude distribution Q_A. Because detected jumps are a mixture of true jumps and FPs, we can write, a priori,

Q_C = W_A Q_A + W_B Q_B,    (18)

where W_A = Γ_A/Γ_C is the probability that a detected jump is an FP, and W_B = Γ_B/Γ_C that it is a true jump. The subtlety here is that, contrary to FPs, true jumps are never detected on their own, as they always summate with a diffusive fluctuation. In other words, we never observe the B_i's directly, but rather the B_i's plus a diffusive increment. Over a short enough time step, diffusive increments are Gaussian variables and are approximately independent of each other. For the purpose of calculating Q_B, we will thus assume these increments are Gaussian with mean zero and variance 2DΔt. Properly accounting for the y-dependence of the mean would be more precise, but would require Q_C to be broken down into a family of distributions parameterized by y, which would in turn require a very large number of detected jumps in the data.
Let Ξ̃ represent a Gaussian distribution with zero mean and variance 2DΔt. The Q_B in Eq. (18) should thus be replaced by the convolution Ξ̃ ∗ Q_B. Furthermore, W_A must in fact be reduced by a factor (1 − Γ_B) to account for the probability that FPs can occur during the same interval as a true jump. This leads to

Q_C = (1 − Γ_B) W_A Q_A + W_B (Ξ̃ ∗ Q_B).    (19)

From this equation, we isolate the convolution term and apply the basic deconvolution algorithm [39] to extract Q_B. Let f be a measured, convolved signal, where the convolution kernel h is known. We seek the intact signal g, such that f = h ∗ g. We compute an estimate of g at each iteration as g_{k+1} = g_k + [f − (h ∗ g_k)], with g_0 = f. The algorithm converges once the correct signal is reached, since the residual between f and h ∗ g_k then becomes zero.
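This fixed-point iteration is the classic Van Cittert scheme; a minimal sketch of it (our own, using a 'same'-mode discrete convolution so that f, g, and h ∗ g share one grid):

```python
import numpy as np

def van_cittert(f, h, n_iter=200):
    """Basic iterative deconvolution: g_{k+1} = g_k + [f - (h * g_k)],
    with g_0 = f.  f is the measured signal, h the known kernel."""
    g = f.copy()
    for _ in range(n_iter):
        g = g + (f - np.convolve(h, g, mode="same"))
    return g
```

The iteration converges when the kernel's frequency response lies between 0 and 2, which holds for a normalized Gaussian kernel such as the Ξ̃ used here; sharp spectral nulls in h would require a regularized variant instead.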

Iterative procedure, noise intensity, and drift function
We now turn to the problem of the simultaneous data-driven estimation of all the unknowns in Eq. (1). To this end, we incorporate the calculations of Sect. 2.4 in the iterative scheme depicted in Fig. 4, which consists of three main branches. The first two initial branches, I and II, are independent and are performed only once; this is followed by branch III, where the iterations take place. In branch I, the threshold is set (Sect. 2.2) and then applied to the time series to yield the detected jump pool, from which Γ_C and Q_C are obtained (Sect. 2.3). The noise intensity is estimated in branch II, along with an initial guess for the drift function F̂_1, which are both used in the first iteration of branch III to calculate Γ̂_B and λ̂ (Sect. 2.4). The last step in branch III uses D̂, Γ̂_B, and λ̂ to estimate the drift function F̂, which is fed back to the first step of branch III in order to iteratively refine the estimation procedure.

Figure 4 Overview of the flow of our iterative procedure. In both validation cases, a satisfying estimate is obtained after about 10 iterations. The threshold and noise intensity are estimated directly from the data in I and II, while the true jump rate and amplitude distribution, as well as the drift function, appear in the iterative phase III. Note that the true jump statistics are not yet established in II, and this is why we resort to the Fokker-Planck equation as a means to obtain a preliminary guess of the drift function F̂_1. A more refined estimate of F is later obtained at the end of branch III.

Noise intensity
As depicted in Fig. 4, the estimation of D does not rely on the value chosen for θ*. It does, however, still use the notion of applying a threshold on the increments. Indeed, calculating D̂ relies on partitioning {X(t)} into mostly jump-free segments, the length and number of which are sensitive to the value of the threshold used for detection: the lower the threshold, the more jumps are detected (some of which are FPs) and the shorter these partitions are, which, as explained below, can skew the estimate of D. A high threshold, on the other hand, leaves a significant number of true jumps in those segments. The goal here is thus to vary the threshold θ in order to obtain the optimal estimate of D.
In the limit of an infinitesimally small sampling interval, Δt → 0, the quadratic variation of a pure diffusion process {Y diff (t)} converges to the so-called integrated variation, which, for additive and time-independent noise, gives [40,41]

Σ_i [ΔX(t_i)]² → 2DT,

where T = (n - 1)Δt is the total duration for the n samples of {X(t)}. We can, therefore, estimate D via the sample quadratic variation, also known as realized variance, RV(Δt) [40,41]:

D̂ = RV(Δt) / (2T). (21)

For instance, for a test diffusive process with F(y) = -0.2y and D = 0.15, sampled at Δt = 0.01 s (N = 10⁶), Eq. (21) estimates D with an error of 0.002%. In contrast, applying Eq. (21) to a realization of a jump-diffusion process returns an overestimated D̂, as expected due to the large, non-diffusive, positive increments that populate {X(t)}. Also note that if the threshold is set low enough to detect the smallest true jumps, in general, it can also remove the largest diffusive increments, which are required to properly apply Eq. (21). Simply removing the detected jumps from the sum in Eq. (21) would, therefore, yield an underestimated D̂.
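The realized-variance estimator of Eq. (21) can be sketched as follows. We reproduce the test diffusion from the text (F(y) = -0.2y, D = 0.15, Δt = 0.01 s), but with a shorter series than N = 10⁶ for speed; the Euler-Maruyama integration is our illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt, n = 0.15, 0.01, 200_000
F = lambda y: -0.2 * y                        # test drift from the text

# Euler-Maruyama integration of the pure diffusion dY = F(Y)dt + sqrt(2D)dW
x = np.empty(n)
x[0] = 0.0
xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] + F(x[i]) * dt + xi[i]

T = (n - 1) * dt                              # total duration
rv = np.sum(np.diff(x) ** 2)                  # realized variance RV(dt)
D_hat = rv / (2.0 * T)                        # Eq. (21)
```

With these parameters, D_hat falls within a fraction of a percent of the true D; with N = 10⁶ samples, as in the text, the statistical error shrinks further.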
To circumvent this problem, we consider only the negative increments of {X(t)} in the calculation of D̂, as they remain essentially unaffected by the presence of positive jumps, with the exception of a short transient following the jump offset. Indeed, following each jump, we expect to see a brief period where the process is out of equilibrium. And since the jumps have positive amplitudes, negative increment statistics are biased toward negative values during this transient (e.g., Fig. 1). The calculation of D̂ is thus based on applying Eq. (21) to jump-free segments of {X(t)}, but only including negative increments, and neglecting the initial transient at the start of each segment (the duration of which is determined below). This is repeated for various values of θ. The successful estimation of D based only on negative increments relies on our assumption that Δt is small, for in this case increments are approximately independent and distributed as N(0, 2DΔt). For larger Δt, the increment PDF can become asymmetric, meaning that the statistics of negative increments differ from those of positive ones, which would cause errors in our estimation of D.
Let T off and T on denote the jump offset and onset times, respectively. Note that the values of these times and the number of detected jumps all depend on the specific value of the threshold. The ith segment is then defined by {S(t)} i = {X(t) : T off (i) < t < T on (i + 1)}, is of duration T i = T on (i + 1) - T off (i), and has increments {ΔS(t)} i , of which there are n i . Out of these n i increments, we keep only the ñ i that are negative and that occur after the transient of approximate duration Φ; this defines the subset {ΔS̃(t)} i of increments retained from each segment. For each segment, we obtain an estimate D̂ i as

D̂ i = (1 / 2T̃ i ) Σ [ΔS̃(t)] i ², (23)

where T̃ i = ñ i Δt is the effective duration of the combined ñ i negative increments. We then calculate D̂ as an average of the D̂ i 's, weighted by T i /T.
More precisely, here are the steps taken in order to arrive at D̂:
• Starting from the largest value of {ΔX(t)}, lower the threshold until the largest 5% of jumps are detected, which are the ones with the most prominent transient.
• Let {S*(t)} i be the segments that follow these jumps (Fig. 5(A)). Average across them for each time step, creating a jump-triggered average trace (Fig. 5(A), black line).
• Identify Φ as the approximate moment when the jump-triggered average stabilizes, quantified here as when its derivative is less than 0.05 (results do not depend strongly on this particular value: changing Φ by one order of magnitude on either side of the value used here yields estimates of D that differ by less than 0.2%). This gives us an estimate for the maximum time scale required for post-jump equilibrium.
• With Φ determined, and for each value of the threshold, extract the {S(t)} i 's (Fig. 5(B)) and apply Eq. (23) to their negative increments for which t > Φ (neglecting segments for which T i < Φ).
This scheme results in an estimate D̂ for each value of the threshold, and the lowest value is chosen as the optimal estimate (Fig. 5(C)). This is because this approach overestimates D on both ends of the range of threshold values, but for different reasons. For low θ, many more jumps are detected, and this makes the {S(t)} i 's shorter, which means that the associated D̂ is more prone to be biased by the residual of the transient. On the other hand, a very high threshold leaves significant jumps in the {S(t)} i 's, which biases the statistics of negative increments. The optimal balance is reached somewhere in between, where the segments are large enough so that the initial transient is negligible, and where the jumps that are inevitably left in the segments do not significantly alter the statistics of negative increments.
The heuristic choice of an intermediate value, namely the one that corresponds to the minimal D̂ estimate, gives excellent results in both validation cases (less than 0.1% error, see Table 2).
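The core of the negative-increment idea can be illustrated with a simplified sketch that omits the segmentation around detected jumps and the transient window Φ. The jump rate, the fixed jump amplitude, and the linear drift below are our own illustrative choices, not those of the validation cases.

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, n = 0.15, 0.01, 100_000
lam, a = 0.2, 1.0                            # Poisson rate (Hz), jump size

# Jump-diffusion with positive jumps: at most one jump per step (lam*dt << 1)
x = np.empty(n)
x[0] = 0.0
xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
jmp = a * (rng.random(n - 1) < lam * dt)
for i in range(n - 1):
    x[i + 1] = x[i] - 0.2 * x[i] * dt + xi[i] + jmp[i]

dx = np.diff(x)
# Naive realized variance: inflated by the positive jump increments
D_naive = np.sum(dx ** 2) / (2.0 * (n - 1) * dt)
# Negative increments only: jumps are positive, so they are excluded
neg = dx[dx < 0.0]
D_neg = np.sum(neg ** 2) / (2.0 * len(neg) * dt)
```

Here D_naive overshoots the true D markedly, while D_neg stays close to it; the full procedure described above additionally discards detected jump segments and post-jump transients, which removes the small residual bias that this bare sketch retains.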

Drift function
Our estimation of the drift function F relies on the differential Chapman-Kolmogorov equation [42], which describes the evolution of the transition probability of a stochastic process where jumps occur alongside diffusive fluctuations. Let Y(t) be a jump-diffusion process with transition probability P Y|Y0 . For the case of positive Poisson jumps and additive diffusive noise, the differential Chapman-Kolmogorov equation reduces to (see the Appendix)

∂P Y|Y0 /∂t = -∂[F(y) P Y|Y0 ]/∂y + D ∂²P Y|Y0 /∂y² + λ ∫ Q B (y - u) P Y|Y0 (u) du - λ P Y|Y0 . (24)

If Y(t) is assumed to have reached its equilibrium state, then the left-hand side vanishes, and in the right-hand side we can replace the transition probability with the first-order equilibrium PDF, P Y , which we assume to be equal to the empirical PDF, P X , of the measured time series; the resulting expression can then be solved for the drift function F. Note, however, that F̂ is required in the first step of branch III of the iterative procedure (Fig. 4), since the FP-related statistics, Γ A and Q A , are calculated based on the drift function. A preliminary estimate F̂ 1 of the drift function is thus required. This particular estimate, which is needed only once throughout the inference procedure, is obtained by letting λ = 0 in Eq. (24), such that it becomes the Fokker-Planck equation associated with the diffusive part of the stochastic process. The stationary solution of this Fokker-Planck equation can be used to establish a relation between the noise intensity, the drift function, and P Y [37,43]:

P Y (y) = K exp[(1/D) ∫ y F(u) du], (25)

where K is a normalization constant and where, again, we assume that P Y = P X . This first preliminary estimate is necessarily flatter than the true F, as the presence of jumps makes P X wider than it would be if there were no jumps. Successive iterations gradually rectify this by incorporating estimates of λ and Q B in Eq. (24).
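The preliminary, λ = 0 step amounts to inverting the stationary Fokker-Planck relation above as F 1 (y) = D d[ln P(y)]/dy. A sketch of this inversion on a simulated Ornstein-Uhlenbeck test process follows; the parameters, histogram binning, and fitting window are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
D, dt, theta, n = 0.15, 0.01, 2.0, 500_000

# Euler-Maruyama realization of a pure diffusion with true drift F(y) = -theta*y
x = np.empty(n)
x[0] = 0.0
xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + xi[i]

# Histogram estimate of the stationary PDF, then F1(y) = D * d(log P)/dy
pdf, edges = np.histogram(x, bins=40, density=True)
y = 0.5 * (edges[:-1] + edges[1:])
keep = pdf > 0
yk = y[keep]
F1 = D * np.gradient(np.log(pdf[keep]), yk)

# Linear fit near the fixed point: the slope should be close to -theta
core = np.abs(yk) < 0.4
slope = np.polyfit(yk[core], F1[core], 1)[0]
```

For this linear test drift the recovered slope approximates -theta; with jumps present, the same inversion would return a flatter drift, which is exactly the bias that the iterations of branch III correct.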

Results
Here we present three applications of the method developed above. First, we validate the calculation of Γ A and Q A for the case of a purely diffusive process. Then we apply the full iterative scheme to two simulated jump-diffusion processes with different characteristics. Finally, we apply our inference method to electrophysiological recordings in pyramidal cells of electric fish.

Validation of the FP statistics calculations
To confirm that the calculations of Q A and Γ A are accurate, we start with a simple test case where we consider a time series {X diff (t)} obtained from a simulated pure diffusion process. As there are no jumps here, the distribution of increments does not possess the necessary asymmetry to properly identify a threshold. For this test case only, we thus opt for a specific value, θ* = 0.1, that showcases the ability of our method to handle FPs of various durations. The results presented here, however, remain valid for a range of values of θ*. For the parameters used in this pure diffusion validation case (Table 1), this range extends from 0.025 up to 0.2. The upper limit is set by the fact that, beyond it, too few FPs are detected, which precludes any statistical calculations from being achieved. The lower limit, on the other hand, arises because too many FPs are detected, such that, for instance, they occur every other time step. In such a case, Δt is too large and the estimation of λ becomes imprecise due to the statistical fluctuations in the number of detected FPs. Applying the threshold in this case leads to a detected jump pool comprised entirely of FPs. The goal now is to compare the measured Q C and Γ C with the calculated Q A and Γ A . If we obtain that Γ C ≈ Γ A and that Q C ≈ Q A , then we will effectively have shown that the true jump rate is zero, λ = 0, and that our method correctly calculates the FP amplitude distribution. In this pure diffusion test case, these calculations rely on knowledge of the correct noise intensity D and the correct drift function F, but this will not be the case in subsequent sections.
With the particular values of D and F used here to simulate {X diff (t)} (Table 1), we find that FPs are either singlets, doublets, or triplets, which contribute differently to the measured amplitude distribution Q C . Indeed, the fact that longer FPs tend to have larger amplitudes and that FP durations are always multiples of Δt creates distinct modes in this distribution. The terms of Eq. (26) are the individual distributions associated with FPs of duration iΔt (Fig. 6(A), yellow curves). These distributions are then summed to obtain Q A , which is a precise match with Q C for this purely diffusive example (Fig. 6(A), black curve). Furthermore, by applying Eq. (5) we calculate the y-dependent detection probability, α(y) (Fig. 6(B), yellow curve). As expected, this function depends non-trivially on y and reflects the nonlinearity of the specific drift function used in this example. If multiplicative noise had been used, the noise intensity D(y) would also have influenced the shape of α(y). To validate this calculation of α(y), we run Monte Carlo simulations of the diffusion process over a duration Δt, but with a time step of Δt/1000 and with various initial conditions along the y-axis. For each initial condition, we evaluate the FP detection probability as the ratio between the number of Monte Carlo runs where Y diff (Δt) > Y diff (0) + θ*, and the total number of Monte Carlo runs (Fig. 6(B), blue dots), the result of which precisely fits with the calculated α(y). Finally, we obtain the overall FP detection probability Γ A from Eq. (6), which, in this pure diffusion test case, differs from Γ C by only 0.06%.
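This Monte Carlo check can be sketched as follows. For additive noise and a single sampling interval, the increment is approximately N(F(y)Δt, 2DΔt), which gives a closed-form Gaussian benchmark; this one-step Gaussian is our simplification and not the Eq. (5) calculation, and we use Δt/100 sub-steps instead of Δt/1000 for speed.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
D, dt, theta_star = 0.15, 0.01, 0.1
F = lambda y: -0.2 * y                    # illustrative test drift

def alpha_gaussian(y0):
    """P(increment > theta*) when the increment is ~ N(F(y0)*dt, 2*D*dt)."""
    mu, sd = F(y0) * dt, sqrt(2.0 * D * dt)
    z = (theta_star - mu) / (sd * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

def alpha_monte_carlo(y0, n_runs=40_000, sub=100):
    """Integrate the diffusion over one interval dt with a finer step dt/sub,
    then count the runs whose net displacement exceeds the threshold."""
    h = dt / sub
    y = np.full(n_runs, y0)
    for _ in range(sub):
        y += F(y) * h + rng.normal(0.0, np.sqrt(2.0 * D * h), n_runs)
    return float(np.mean(y > y0 + theta_star))
```

Evaluating both functions on a grid of initial conditions traces out an α(y) curve analogous to Fig. 6(B); for a linear drift the two agree closely, since the one-interval increment is then exactly Gaussian.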

Validation of the iterative scheme
Before applying the iterative procedure (Fig. 4) to real data, we first validate it against time series generated by numerically integrating Eq. (1) (using the Euler-Maruyama scheme). We consider two validation cases: in Case 1, the amplitude of the jumps is comparable to the diffusive fluctuations and jumps are sparser in time (i.e., occur at a lower rate) than in Case 2, where jumps are much larger than the background noise and their rate is double that of Case 1 (Fig. 7(A)). The specific functions and parameters used to generate and analyze these validation data are shown in Table 1 and can be summarized as follows: low rate, low amplitude, high noise for Case 1, and high rate, high amplitude, low noise for Case 2. Preliminary tests with a linear drift function showed a successful fit between the fitted SDE and the numerical data. We now opt for a more general and arbitrary shape where the drift function is nonlinear and non-monotonic. The only restrictions are that it yields a single stable fixed point and that the resulting stochastic process is stationary. We thus restrict our analyses to drift functions that are mostly decreasing. Although the parameters and functions used for the simulations are known, they are not used in the inference procedure; only {X(t)} is. To assess the performance of the proposed method, we compare the estimated D̂, λ̂, F̂, and Q̂ B with their true values. We choose these two specific cases because they challenge both the sensitivity and the robustness of our method. In Case 1, the statistics of {X(t)} are not too different from those of a pure diffusion process, which makes it easier to estimate F, but difficult to extract Q B amidst the FPs. In contrast, the jumps in Case 2 are well separated from the diffusive fluctuations, which allows for a more direct estimation of Q B .
The presence of large jumps in this case, however, significantly alters {X(t)} and its PDF, making it harder to estimate F. In both cases, nonetheless, we find that the original SDEs can be precisely recovered by our method. For instance, simulating Eq. (1) with the D̂, λ̂, F̂, and Q̂ B of the last iteration not only produces time series that resemble the originals (Figs. 7(A) and 7(B)), but also yields an excellent fit between the reconstructed and original PDFs (Fig. 7(C)), with an O(10⁻⁴) root-mean-square error (normalized by the range of {X(t)}), and between the reconstructed and original ACFs (Fig. 7(D)). Inspecting the results from the last iteration, we indeed see that the correct drift function is recovered through the use of the differential Chapman-Kolmogorov equation (Fig. 8(A)). This drift function is then used to calculate the next Q A and Γ A (Fig. 8(B)), which allows the correct Q B to be demixed from the measured Q C (Fig. 8(C)). We also obtain low relative errors when comparing our estimates with the correct values of D and λ (Table 2). Note also that, although not shown here, we obtain the same fit quality between original and reconstructed processes when we apply our method to hybrid cases where, for instance, small jump amplitudes are paired with a higher rate instead of a lower one, and vice versa.
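The generation of such validation data can be sketched as follows: a generic Euler-Maruyama integrator for Eq. (1) with Poisson jumps. The linear drift and exponential jump amplitudes used in the example call are illustrative choices of ours, not the Table 1 functions.

```python
import numpy as np

def simulate_jump_diffusion(F, D, lam, draw_amp, dt, n, y0=0.0, seed=0):
    """Euler-Maruyama realization of a jump-diffusion SDE: additive noise
    of intensity D plus Poisson jumps of rate lam. Assumes lam*dt << 1,
    so at most one jump is drawn per step; amplitudes come from draw_amp."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = y0
    xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
    amp = draw_amp(rng, n - 1) * (rng.random(n - 1) < lam * dt)
    for i in range(n - 1):
        y[i + 1] = y[i] + F(y[i]) * dt + xi[i] + amp[i]
    return y

# Example: linear drift with exponential jump amplitudes (illustrative).
y = simulate_jump_diffusion(
    F=lambda v: -1.0 * v, D=0.15, lam=0.5,
    draw_amp=lambda rng, k: rng.exponential(1.0, k),
    dt=0.01, n=200_000,
)
# For F(y) = -theta*y, the stationary mean is lam*E[A]/theta (= 0.5 here).
```

A quick sanity check on such a realization is the stationary mean: balancing the average drift against the average jump input gives lam*E[A]/theta for a linear drift, which the simulated trace should fluctuate around.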

Application to experimental data
We now proceed with an application of our inference method to electrophysiological data published in Ref. [44]. These data consist of in vitro, intracellular recordings of membrane voltage fluctuations in pyramidal neurons of the weakly electric fish Apteronotus leptorhynchus. These fish are endowed with an active sensing mechanism whereby they generate a high frequency (∼700 to 1000 Hz) oscillatory electric field around their body. This electric signal, along with any distortions caused by objects, prey, and conspecifics, is sensed by electroreceptors located on the fish's body. This information is then sent to the hindbrain, where it reaches the first stage of electrosensory processing, called the electrosensory lateral line lobe (ELL). The recordings in Ref. [44] were taken from neurons of the ELL, specifically in the centrolateral and centromedial segments (CLS and CMS, respectively). In order to isolate the impact of voltage-gated ion channels on membrane potential fluctuations, the ELL was treated with pharmacological agents (CNQX and APV) that block synaptic transmission onto the pyramidal cells. The resulting fluctuations are thus fully attributed to cell-intrinsic sources, which we refer to as membrane noise. The main source of this type of noise is often assumed to be the stochastic opening and closing of ion channels, i.e., channel noise [45]. In this case, however, we cannot rule out other potential contributions, as non-trivial soma-dendrite interactions have been observed in these cells [46]. Note also that, by imposing different holding currents on the cells, Ref. [44] recorded ongoing membrane noise at various levels of hyperpolarization relative to spike threshold.
Of interest here is the presence of large, jump-like events, called blips, that abruptly depolarize the cells (Fig. 9(A), asterisks). Although Ref. [44] puts forth a hypothesis as to the functional role for these blips, the mechanism underlying their occurrence is unknown. This, along with the limited amount of data, hinders the development of any meaningful mechanistic model of this phenomenon. The jump-diffusion inference approach developed here, however, is particularly well suited to circumvent this knowledge gap. Indeed, the resulting phenomenological models provide a useful tool for dynamically interpreting the available data without relying on poorly constrained biophysical mechanisms. For instance, we can address questions such as: Do certain parameters or functions of the model change as a function of the mean membrane potential?

Figure 9 (caption, in part): (B) This similarity is confirmed by the close match between the data and simulation PDFs (yellow curve and black dots, respectively). (C) Despite having no role to play in the inference procedure, the power spectrum of the data (black) also fits with that of the simulations (yellow). The notches in the power spectrum of the data result from the removal of experimental artifacts.
Here we analyze recordings from two CMS and two CLS cells, each with five or six levels of hyperpolarization: from -25 to 0 mV below threshold, with 5 mV steps between levels. Using these relative levels with respect to spike threshold is required to compare cells that might have different thresholds (e.g., -67 to -63 mV for CMS cells [44]). After removing experimental artifacts (see Sect. 3.4), we obtain a total of 23 traces, each lasting approximately 10 s. Applying our inference method to these traces yields a good fit between the resulting simulations and the original data ( Fig. 9(A)): the PDFs differ only by O(10 -2 ) normalized root-mean-square errors, and the power spectra fall within 95% confidence intervals of each other (Figs. 9(B) and 9(C)).
Further insight can be gained by comparing the estimated SDE parameters and functions D̂, λ̂, Q̂ B , and F̂ across all traces. We thus see, for instance, that CLS cells increase their jump rate (Fig. 10(A)), but not their jump amplitudes (Fig. 10(B)), as they approach threshold. Note also that we could not measure any significant jump component for CMS cells. Instead, fluctuations in these cells are well described by pure diffusion. Furthermore, all cells show an increase in their diffusive noise intensity with depolarization (Fig. 10(C)). Lastly, to compare the different drift functions with a scalar measure, we apply a linear fit to F (estimated as in the previous section) in the vicinity (±0.2 mV) of the stable fixed point. The slope parameter resulting from this fit can be interpreted as a measure of how wide or narrow the potential function is around the resting membrane voltage. Using this measure, we find no systematic intra-cell trend, but we do observe large differences between cell types: CLS cells have a wider potential function than CMS ones (Fig. 10(D)).

Data processing
For each cell, the raw data consist of a continuous, 60 to 70 second staircase-like trace, sampled at 20 kHz. Each step lasts ∼10 s and corresponds to a different holding current, applied so as to create a 5 mV hyperpolarization relative to the previous level. In order to segment the recordings into different traces for each level, we first identify the transition times between different holding currents. This is done visually, as the transitions are unambiguous, and we omit ±0.5 seconds around those times. At the -5 and 0 mV levels, a few spikes (1 to 4) occur in the recordings. They are manually removed from the traces along with the ensuing refractory period.
A conspicuous aspect of the resulting traces is the presence of slow, large amplitude (1 Hz, ∼1 mV) quasi-oscillations overlaid with the faster, stochastic fluctuations. The exact source of this slow component is unknown, but is possibly related to persistent sodium channels, which have been shown to populate the soma and proximal apical dendrites of ELL cells [44] and to produce slow perithreshold oscillations in entorhinal stellate neurons [47]. In any case, these slow oscillations are outside the scope of the method presented here. A moving average filter (0.05 s window size) is thus applied to remove this low frequency content from the signal.
Line noise is removed at all 60 Hz multiples with a notch filter, but the data are also contaminated with artifacts in other frequency bands, potentially from interference with other sources. This is most prominent in the 900-3000 Hz band, but also appears around lower frequencies, e.g., 100, 270, and 550 Hz. To account for these artifacts, we opt for the combination of a low-pass filter with a 900 Hz cut-off and 20 Hz wide band-stop filters centered on the other problematic frequencies. Electrophysiological recordings can be dominated by measurement noise at high frequencies [48]. In our case this is seen as a flattening out of the PSD above 1000 Hz, so the 900 Hz cut-off used here does not lead to the loss of important biological signals.
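This processing chain (moving-average detrending from the previous paragraph, 60 Hz notch, 900 Hz low-pass) can be sketched with scipy.signal on a synthetic trace. The component frequencies and amplitudes below are illustrative, and the additional 20 Hz wide band-stops at 100, 270, and 550 Hz would follow the same pattern as the notch shown here.

```python
import numpy as np
from scipy import signal

fs = 20_000.0                                    # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
keep = np.sin(2 * np.pi * 200.0 * t)             # band we want to preserve
raw = (keep
       + 1.0 * np.sin(2 * np.pi * 1.0 * t)       # slow quasi-oscillation
       + 0.5 * np.sin(2 * np.pi * 60.0 * t)      # line noise
       + 0.5 * np.sin(2 * np.pi * 1500.0 * t))   # high-frequency artifact

# 1) subtract a 0.05 s moving average to remove the slow component
w = int(0.05 * fs)
x = raw - np.convolve(raw, np.ones(w) / w, mode="same")
# 2) notch out the 60 Hz line noise (zero-phase filtering)
b_n, a_n = signal.iirnotch(60.0, Q=30.0, fs=fs)
x = signal.filtfilt(b_n, a_n, x)
# 3) low-pass at 900 Hz against the high-frequency artifacts
b_l, a_l = signal.butter(4, 900.0, btype="low", fs=fs)
out = signal.filtfilt(b_l, a_l, x)

def amp(sig, f):
    """FFT amplitude at frequency f (f must fall on an FFT bin here)."""
    spec = np.fft.rfft(sig)
    return 2.0 * np.abs(spec[int(round(f * len(sig) / fs))]) / len(sig)
```

Zero-phase filtering (filtfilt) is used so that the timing of jump-like events is not distorted by filter group delay.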
The end result of this processing chain is a set of time series that exhibit fluctuations typical of jump-diffusion processes. We do observe, however, significant higher-order correlations on the smallest timescales (O(Δt), Δt = 50 μs). To quantify these correlations, we use the notion of the Einstein-Markov timescale [5]. This is a measure of the timescale below which the Markov property no longer holds. Stochastic time series often show a departure from the Markov property on small timescales, possibly due to noise source correlation, the presence of an inertial component in the dynamics, or measurement noise [5]. Following [5] and [49], we estimate this Einstein-Markov timescale by finding the value of τ that minimizes

χ²(τ) = ∫ dx₁ dx₂ dx₃ [P(x₃ | x₂, x₁) - P(x₃ | x₂)]² / σ²,

where x₁ = x(t), x₂ = x(t + τ), x₃ = x(t + 2τ), and σ² is the sum of the traces of the covariance matrices associated with the distributions in the numerator. For a proper Markov process, χ² = 0, ∀τ. In this case, we find the minimum of χ² at 1.2 ms, indicating that the Einstein-Markov timescale of the data is over one order of magnitude larger than the sampling interval Δt. This means that, on the time scale of individual observations, the data evolve with a history dependence that is incompatible with a Markovian description. If, however, we look at the data on a coarser time scale, e.g., the Einstein-Markov timescale of 1.2 ms, then the Markov property is approximately satisfied. Only in that case can we hope to use Eq. (1) as a valid model for these data. To account for this problem, we resample the data at a 1.2 ms interval (∼830 Hz sampling rate), with the time step thus equal to the Einstein-Markov timescale, and obtain the final time series on which to apply our method (Fig. 9(A), top). Note that this situation is conceptually similar to how the Langevin model of diffusion (where the position of a particle is, by itself, not Markovian) reduces to the Einstein model (where the position is Markovian) only above a certain time scale [43].
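A coarse-grained sketch of this Markov test follows, using quantile-binned conditional histograms in place of the density estimates of [5,49], and without the σ² normalization. The Ornstein-Uhlenbeck process observed through white measurement noise is our own illustration of a small-scale Markov violation that fades at larger lags.

```python
import numpy as np

def chi2_markov(x, tau, nbins=5):
    """Compare P(x3 | x1, x2) with P(x3 | x2) on a coarse quantile grid for
    triplets spaced by lag tau. Vanishes (up to sampling noise) if Markov."""
    x1, x2, x3 = x[:-2 * tau], x[tau:-tau], x[2 * tau:]
    edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    bin_of = lambda v: np.searchsorted(edges, v, side="right") - 1
    counts = np.zeros((nbins, nbins, nbins))
    np.add.at(counts, (bin_of(x1), bin_of(x2), bin_of(x3)), 1.0)
    p123 = counts / counts.sum()
    p12 = p123.sum(axis=2)                        # P(x1, x2)
    p23 = p123.sum(axis=0)                        # P(x2, x3)
    p2 = p12.sum(axis=0)                          # P(x2)
    with np.errstate(divide="ignore", invalid="ignore"):
        c12 = np.where(p12[:, :, None] > 0, p123 / p12[:, :, None], 0.0)
        c2 = np.where(p2[None, :, None] > 0,
                      p23[None, :, :] / p2[None, :, None], 0.0)
    return float(np.sum(p123 * (c12 - c2) ** 2))

# OU process plus white measurement noise: not Markov on small time scales.
rng = np.random.default_rng(4)
D, dt, theta, n = 0.5, 0.01, 1.0, 200_000
u = np.empty(n)
u[0] = 0.0
xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
for i in range(n - 1):
    u[i + 1] = u[i] - theta * u[i] * dt + xi[i]
obs = u + rng.normal(0.0, 0.3, n)                 # measurement noise
```

Evaluating chi2_markov(obs, tau) for increasing tau shows the statistic decaying toward its sampling floor, mirroring how the Einstein-Markov timescale is located in the data.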

Discussion
In this study, we develop an iterative procedure that recovers the parameters and functions of a jump-diffusion SDE, based solely on a realization of the associated stochastic process. This approach is validated when the jumps are comparable in size to the diffusive fluctuations (Case 1), as well as when they are much larger than diffusive fluctuations (Case 2). We apply this method to membrane voltage fluctuations recorded in pyramidal neurons of electric fish. Our analysis reveals that these data can indeed be represented as jump-diffusion processes. We find that pyramidal neurons increase their jump rate and noise intensity as they approach spike threshold, while their jump amplitudes and drift function remain unchanged. Our approach relies on five main components: the use of the differential Chapman-Kolmogorov equation to estimate the drift function, the use of quadratic variation on jump-free segments to estimate the diffusive noise intensity, the detection of jumps via threshold-crossing of the increments, the modeling of detected jumps as a mixture of true jumps and FPs, and the calculation of FP statistics used to extract true jump statistics from the detected jump pool.
Although we estimate the drift function and the true jump amplitude distribution nonparametrically, we do limit our study to the case of additive diffusive noise, of constant jump rate, and of Poisson jumps. Relaxing the additive noise assumption would require an estimation scheme for the diffusion function D(y). For purely diffusive processes, this function can be obtained directly through the estimation of the second Kramers-Moyal coefficient, which is defined in terms of the second conditional moment of the increments. Evaluating this moment simply requires the knowledge of the conditional PDF across time steps. For a Poisson jump-diffusion process, however, Ref. [31] has shown that the diffusion function can in fact be expressed in terms of the second conditional moment of the increments, the jump rate, and the second moment of the jump amplitudes. It should thus be possible to include the estimation of D(y) into the iterative portion of our method (Fig. 4). Indeed, estimates of the jump rate and of the amplitude distributions could be used at each pass to estimate the diffusion function. Furthermore, we have limited our analysis to noise intensities for which jump amplitudes are on average an order of magnitude or more larger than diffusive fluctuations. When diffusive fluctuations and jumps are of similar average magnitude, the number of detected FPs becomes too large and estimates of λ and Q A become imprecise due to increased statistical fluctuations. A much finer temporal resolution would be necessary to address this particular case.
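The binned second-conditional-moment estimate mentioned above can be sketched as follows for a purely diffusive test process, where the second Kramers-Moyal coefficient should come out flat and equal to the noise intensity D. The bin count and the minimum-count threshold are illustrative; for a jump-diffusion, the jump correction derived in Ref. [31] would have to be subtracted.

```python
import numpy as np

def km2(x, dt, nbins=30, min_count=200):
    """Binned second Kramers-Moyal coefficient for a diffusion:
    D2(y) = <(dX)^2 | X(t) = y> / (2 dt). For additive noise, D2(y) = D."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, nbins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d2 = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = idx == b
        if sel.sum() >= min_count:             # skip poorly sampled bins
            d2[b] = np.mean(dx[sel] ** 2) / (2.0 * dt)
    return centers, d2

# Test on an OU process with additive noise of intensity D = 0.15:
rng = np.random.default_rng(5)
D, dt, n = 0.15, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
xi = rng.normal(0.0, np.sqrt(2.0 * D * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + xi[i]        # drift F(y) = -y
centers, d2 = km2(x, dt)
```

For multiplicative noise, the same binning would return the y-dependent profile D(y), which is what an extension of our iterative scheme would feed back at each pass.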
As for the assumption of a constant jump rate, it should be possible to extract a rate function λ(y) as long as a y-dependent version of Eq. (17) can be written. This would require a data time series long enough to produce an estimate of Γ C (y). Relaxing the assumption of Poisson jumps, however, would be more difficult. The detection probability of true jumps, Γ B = λΔt, would obviously need to be modified with the appropriate expression. Moreover, the specific form of the differential Chapman-Kolmogorov equation used here, Eq. (24), relies on the assumption of Poisson jumps (see the Appendix) and would thus need to be extended in a manner that depends on the precise non-Poissonian nature of the jump process. More specifically, the last two terms in Eq. (24), which are originally defined based on the transition rates of the Poisson jump process, would now be derived from the modified Γ B . Note that, for the special case of true jumps with zero-mean amplitudes, the drift function can be estimated directly from the first conditional moment of the increments, without relying on Eq. (24) [31].

Membrane noise
The unusual characteristics of membrane noise observed in CLS neurons, initially reported in Ref. [44] and represented here as jump-diffusion SDEs, might be implicated in novel ways in electrosensory processing. The analysis we perform here is a first step toward investigating this possibility computationally.
The positive impact of noise on information processing in neural systems is widely recognized [50]. Although channel noise was initially thought to be too weak compared to synaptic noise to influence a neuron's output statistics [51,52], it has since been shown to significantly impact neuronal reliability and action potential timing [47,53-57]. Because it arises from the stochastic opening of voltage-gated ion channels, channel noise has been successfully modeled by populations of Markov chains with voltage-dependent transition rates [45]. In the quest for more computationally efficient models, however, various approximations have been used to model the collective behavior of these Markov chains as simple diffusion SDEs [57-60] (sometimes called the diffusion approximation for channel noise [61,62], in reference to the diffusion approximation for synaptic bombardment). One of these approaches, for instance, introduces a current noise term directly in the membrane equation [57,63], effectively modeling the subthreshold voltage locally as an Ornstein-Uhlenbeck process. It is perhaps not surprising, then, that we obtain a good match between the observed CMS membrane noise (at various holding potentials) and a pure diffusion SDE. In cases where membrane noise is more accurately described by multiplicative conductance noise [57], however, we might expect deviations from the simple SDE used here, similar to how the diffusion approximation can misrepresent the subthreshold voltage distribution for certain types of synaptic drive [64].
We cannot completely exclude the possibility that small, hard to detect blips occur in CMS cells. Although our method proves capable of handling this type of situation (Sect. 3.2), the limited amount of available experimental data in this case precludes us from conclusively ruling out the existence of a jump component in models of CMS membrane noise. In addition, in both types of cells we find a positive correlation between the noise intensity and the holding potential (Fig. 10). This is consistent with how membrane potential variance has been observed to increase with depolarization in these same cells [44], as well as in rat neocortical pyramidal neurons [65]. Simple Markov models involving only Na + and K + channels are able to reproduce this correlation [66].
The presence of blips in CLS neurons suggests that ion channels co-activate to produce abrupt depolarizing currents. It was shown in [67] that sodium channels appear in clusters, or hot spots, in ELL pyramidal cells. This might allow local depolarization of the membrane to sufficiently couple channels within a cluster. Alternatively, perhaps channels are physically and functionally coupled through scaffolding protein complexes, as observed in [68]. Regardless of the exact coupling mechanism, the commonly used assumption of independence between channels [45,57] is likely violated by these blips. The fact that we have successfully fitted a jump-diffusion SDE to CLS membrane noise suggests that a diffusion-like approximation could be applied in this case as well. Such a deductive approach would, however, require tentative descriptions of local channel coupling to be included in the kinetic schemes.
Given the unknown biophysical mechanism underlying the blips, our fitted jump-diffusion model is uniquely positioned to address questions related to their functional role. Future work will thus aim to incorporate our fitted jump-diffusion model as a membrane noise term in a more complete model of CLS cells [69]. By accounting for the synaptic input associated with electrosensory stimulation, the resulting model could investigate the possibility that blips assist or influence spiking, perhaps through a stochastic resonance-like phenomenon. Stochastic resonance, and more generally stochastic facilitation, has been shown to be mediated by channel noise in models of auditory brain stem neurons [51] and in modeled neuronal arrays [70], as well as by synaptic noise in models of neocortical pyramidal neurons [71]. Since blips share a similar shape and amplitude with AMPA-driven excitatory post-synaptic potentials, this begs the question of whether stochastic resonance is at play in the detection of weak electrosensory signals, such as small prey. This was indeed hypothesized, but not explicitly shown, in Ref. [44]: the voltage-dependence of membrane noise makes it impossible to experimentally vary the noise level, the rate of occurrence, or the amplitude of blips independently of the membrane potential. The value of this hypothesis, however, could be assessed by performing a numerical experiment with our fitted SDE representation of blip-laden membrane noise.
Lastly, although we apply our method here to recordings where synaptic input is completely blocked, it might be applicable to certain types of synaptic input patterns, such as correlated bombardment. We also hope to apply this method to fluctuations of the active-sensing rate of pulse-type electric fish, where jump-like events also occur [72].