
Cross-Correlations and Joint Gaussianity in Multivariate Level Crossing Models

Abstract

A variety of phenomena in physical and biological sciences can be mathematically understood by considering the statistical properties of level crossings of random Gaussian processes. Notably, a growing number of these phenomena demand a consideration of correlated level crossings emerging from multiple correlated processes. While many theoretical results have been obtained in the last decades for individual Gaussian level-crossing processes, few results are available for multivariate, jointly correlated threshold crossings. Here, we address bivariate upward crossing processes and derive the corresponding bivariate Central Limit Theorem as well as provide closed-form expressions for their joint level-crossing correlations.

1 Introduction

Various phenomena in the biological or physical sciences are amenable to description in terms of level crossings of random Gaussian processes [1, 2]. Examples of these phenomena are spike coordination of neurons in the brain [3], insurance risk assessment [4] and stress levels generated by ocean waves [5]. Therefore, a number of mathematical studies in recent decades have focused on the statistical properties of level crossings arising from stationary Gaussian processes [2]. However, this literature largely addresses the properties of one level-crossing process and rarely deals with the coordinated level crossings of multivariate Gaussian processes. A prominent application where correlated level crossings are of particular importance is neuroscience. Recent work has shown that the spikes of a cortical neuron can be approximated by a Gaussian level-crossing process [3, 6]. The assumption of Gaussianity is prompted by the experimental observation that cortical neurons are on average connected to 10,000 neurons and therefore receive a barrage of inputs that together lead to a near-Gaussian fluctuation at the cell body of any given cortical neuron [7]. The spikes of two neurons are then modeled as upward level-crossing times of two cross-correlated fluctuating Gaussian potentials.

In this article we aim to address two features of level crossings of multiple correlated Gaussian processes. First, we want to clarify whether level-crossing counts derived from multiple correlated processes are jointly Gaussian. Second, we want to understand how many more coincident level crossings in a given time instance are expected if the underlying Gaussian random processes are correlated. Let us provide an intuitive reason for these questions. Starting with the first question, we recognize that if level-crossing counts of two neurons were jointly Gaussian, then a simple measure of dependence is the covariance or the Pearson correlation coefficient. Measuring a vanishing correlation coefficient or vanishing covariance between two neuronal spike counts would in this case imply true statistical independence, because only in the case of a multivariate Gaussian distribution is it permissible to conclude independence from vanishing count correlations. This implication is not permissible if the marginal distributions are not Gaussian, or are Gaussian but the joint distribution is not a multivariate Gaussian distribution. While marginal Gaussianity has been shown for level-crossing counts in [2] for large bin sizes, joint Gaussianity is still an open question. It might seem natural to infer joint Gaussianity from marginal Gaussianity for multivariate level-crossing processes; however, numerous counterexamples prove this intuition wrong, see Sect. 5 in [8]. Here, we use a modified Breuer–Major Theorem to prove joint Gaussianity and show that any linear combination of level-crossing counts of the two processes is also Gaussian.

The second question we address in this article deals with the conditional probabilities of two level-crossing processes. We are interested in how the level crossings of one Gaussian process can be used to predict the level-crossing probability of the partner process in a specific time interval relative to the observed level crossing in one process. In neuroscience, coordinated neuronal firing drives changes in synaptic connectivity, and calculating the spike count dependencies across neurons is therefore a topic of current research efforts (e.g. Chap. 8 in [9]). The available mathematical results for conditional upward crossings in Gaussian processes currently comprise mostly variances and moments for one level-crossing process (see Chaps. 3–5 in [2]) as well as the low and high correlation limits in pairs of processes [3, 10]. A comprehensive closed-form solution covering the complete level-crossing cross-correlation function is, as yet, lacking. Here, we use a regression approach to derive, for all correlation strengths, the conditional level-crossing correlation functions in two continuous Gaussian processes. We hypothesize that the level-crossing correlations we provide in this article could also be valuable in fields outside of neuroscience, for example, in risk assessment calculations to predict the risk of joint default for insurance purposes.

The article is structured as follows. In Sect. 2 we define the mathematical model setting and introduce the concept of level crossings and specifically the upward crossings. In Sect. 3 we use a regression approach to obtain a general closed-form solution for cross-correlations of level crossings in two correlated Gaussian processes. In Sect. 4 we prove the joint Gaussianity (Central Limit Theorem) for the correlated joint upward crossings of two correlated Gaussian processes. In the section on materials and methods (Sect. 6) we provide detailed derivations of the reported results. We assume throughout this article that both level-crossing processes arise from crossings of the same threshold level by two Gaussian processes with different variances. This is permissible because the number of level crossings, the Rice rate [11], depends only on the variance-to-threshold ratio, but not on these quantities individually. We therefore work with a pair of level-crossing processes where each process has a unique voltage variance, and therefore the rates of crossings in the two neurons being considered are, unless stated otherwise, not the same. Let us note that this assumption is prompted by the observation that in a living brain typically no two neurons are identical in all their properties; they differ at least in their firing rates.

2 Mathematical Definitions of Multivariate Level Crossings

Here we address the statistics of coincident level crossings arising from two Gaussian processes that share a common latent source. This situation is illustrated in Fig. 1(a). We choose to illustrate the situation using a neuroscience perspective. Neurons in a mammalian brain receive synaptic inputs, both excitatory and inhibitory, from thousands of other neurons. Particularly in the visual cortex, the excitatory and inhibitory inputs largely cancel each other and lead to a net fluctuating residual current at the cell body of each neuron. These residual fluctuations drive the spikes of individual neurons. These voltage fluctuations arise from largely independent inputs so they are well approximated by a random Gaussian process with temporal correlations determined by the temporal structure of synaptic currents [7].

Fig. 1

Cross-correlations in the Gaussian variables lead to correlations of coincident level crossings. a Spike correlations can arise from common input in a neuronal network. b We consider coincident level crossings arising from two Gaussian processes that share a common latent source. Whenever the voltage crosses a threshold $\psi$ from below a spike is emitted. Spikes are indicated by vertical solid lines. The vertical dotted lines indicate the width of a time bin $T$ used to compute spike counts $U_{[0,T]}^{V_i}(\psi)$, $i=1,2$

2.1 Definitions of Multivariate Voltage Distributions

We begin by defining the random, temporally correlated, zero mean Gaussian process $V_j(t)$, which represents the voltage of a neuron $j$:

$$V_j(t) = \int_{\mathbb{R}} e^{it\lambda} f_V^{1/2}(\lambda)\left(\sqrt{1-r}\,dW_j(\lambda) + \sqrt{r}\,dW_c(\lambda)\right),$$
(1)

where $f_V$ is a combination of filters, $f_V(\lambda) = \gamma(\lambda)g(\lambda)$, where $\gamma$ represents the membrane filter and $g$ the synaptic filter. Both of these filters can be chosen freely, but their combination should guarantee a continuously differentiable voltage trajectory. $W_j$ with $j \in \{1,2\}$ and $W_c$ are complex random measures with independent increments, such that for all Borelian sets $A \in \mathcal{B}(\mathbb{R})$ we have $E|W(A)|^2 = m(A)$, the Lebesgue measure of $A$. By $W_c$ we denote the common noise component. Moreover, if $A \cap B = \emptyset$ then $W(A)$ and $W(B)$ are independent Gaussian random variables. The correlation strength $r \in [0,1)$ denotes the presynaptic overlap of neurons 1 and 2 generated in a neuronal network, and it is illustrated in Fig. 1. If $r=0$ the voltages $V_1$ and $V_2$ are independent; if $r=1$ the voltages $V_1$ and $V_2$ are identical. The auto- and cross-correlation functions of $V_j$ and $V_k$ are, respectively,

$$C_{V_j}(\tau) = \langle V_j(0) V_j(\tau)\rangle = \sigma_{V_j}^2\, c(\tau),$$
(2)
$$C_{V_{jk}}(\tau) = \langle V_j(0) V_k(\tau)\rangle = r\,\sigma_{V_j}\sigma_{V_k}\, c(\tau) \quad \text{for } j \neq k,\ j,k \in \{1,2\},$$
(3)

where $c(\tau) = \int e^{i\tau\lambda} f_V(\lambda)\,d\lambda$ and $\tau$ is the considered delay. The vector $(V_1(0), V_1'(0), V_2(\tau), V_2'(\tau))$ comprising the voltages and their derivatives is Gaussian and has the covariance matrix

$$\Sigma(\tau) = \begin{pmatrix} \sigma_{V_1}^2 & 0 & \Sigma_{13}(\tau) & \Sigma_{14}(\tau) \\ 0 & \sigma_{V_1'}^2 & -\Sigma_{14}(\tau) & \Sigma_{24}(\tau) \\ \Sigma_{13}(\tau) & -\Sigma_{14}(\tau) & \sigma_{V_2}^2 & 0 \\ \Sigma_{14}(\tau) & \Sigma_{24}(\tau) & 0 & \sigma_{V_2'}^2 \end{pmatrix},$$
(4)

where the variances are $\sigma_{V_j}^2 = C_{V_j}(0)$, $\sigma_{V_j'}^2 = -C_{V_j}''(0)$ and the covariance functions are given by

$$\Sigma_{13}(\tau) = \langle V_1(0) V_2(\tau)\rangle = r\sigma_{V_1}\sigma_{V_2}\, c(\tau), \qquad \Sigma_{14}(\tau) = \langle V_1(0) V_2'(\tau)\rangle = r\sigma_{V_1}\sigma_{V_2}\, c'(\tau), \qquad \Sigma_{24}(\tau) = \langle V_1'(0) V_2'(\tau)\rangle = -r\sigma_{V_1}\sigma_{V_2}\, c''(\tau).$$

We use the correlation time $\tau_s$ to quantify the width of the correlation function $c(\tau)$:

$$\tau_s = \sqrt{c(0)/|c''(0)|}.$$
(5)

If the filters γ(λ) and g(λ) are classic low-pass filters, then the correlated voltage processes of Eq. (1) can be written in a differential form for each neuron j:

$$\tau_M V_j'(t) = -V_j(t) + I_j(t),$$
(6)

where $I_j(t)$ is the residual Gaussian current fluctuation with noise amplitude $\sigma$, and $\tau_M$ is the membrane time constant of the neuron, e.g. [12–15]. The synaptic drive $I_j(t)$ can be separated into two parts: a common and an individual noise component

$$I_j(t) = \int e^{it\lambda}\sqrt{C_\xi(\lambda)}\left(\sqrt{1-r}\,dW_j(\lambda) + \sqrt{r}\,dW_c(\lambda)\right),$$
(7)

where C ξ (λ) is the synaptic noise spectral density. Using Eq. (6) we obtain the following spectral representation for the stationary solutions:

$$V_j(t) = \sigma\int \frac{e^{it\lambda}\sqrt{C_\xi(\lambda)}}{1+i\tau_M\lambda}\left(\sqrt{1-r}\,dW_j(\lambda) + \sqrt{r}\,dW_c(\lambda)\right).$$

In this form the spectral density of each $V_j$ is given by $f_V(\lambda) = C_\xi(\lambda)/(1+\tau_M^2\lambda^2)$. Analogously to Eq. (3), we obtain

$$E[V_1(0)V_2(\tau)] = r\,\sigma_1\sigma_2\int \frac{e^{i\tau\lambda}\, C_\xi(\lambda)}{1+\tau_M^2\lambda^2}\,d\lambda = r\,\sigma_1\sigma_2\, c(\tau).$$
(8)
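As a concrete illustration of Eqs. (6)–(8), the following Python sketch integrates two voltage processes driven by a shared plus a private Ornstein–Uhlenbeck current. This is our minimal illustration, not code from the article; the parameter values (time step, time constants, r, sigma) are assumptions chosen for demonstration. The zero-lag voltage correlation coefficient it prints approximates the input correlation strength r, as Eq. (8) predicts.

```python
import numpy as np

# Minimal sketch (assumed parameters): two voltages V_1, V_2 as in Eqs. (6)-(7),
# each low-pass filtering an input current with a common and a private noise part.
rng = np.random.default_rng(0)
dt, n_steps = 0.05, 400_000          # time step (ms) and number of steps
tau_M, tau_I = 10.0, 5.0             # membrane and synaptic time constants (ms)
r, sigma = 0.5, 1.0                  # input correlation strength, drive amplitude

V = np.zeros((2, n_steps))
I = np.zeros(2)
for t in range(1, n_steps):
    xi_c = rng.standard_normal()     # common increment, plays the role of dW_c
    xi_p = rng.standard_normal(2)    # private increments, dW_1 and dW_2
    drive = np.sqrt(1.0 - r) * xi_p + np.sqrt(r) * xi_c
    # Ornstein-Uhlenbeck input current with a low-pass spectral density C_xi
    I += -I / tau_I * dt + sigma * np.sqrt(2.0 * dt / tau_I) * drive
    # membrane equation, Eq. (6): tau_M dV/dt = -V + I
    V[:, t] = V[:, t - 1] + (-V[:, t - 1] + I) * dt / tau_M

# by Eq. (8) the zero-lag voltage correlation coefficient equals r
print(np.corrcoef(V[0], V[1])[0, 1])
```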

2.2 Upward Crossing Definitions

Neurons communicate using brief pulses, so-called spikes, which are emitted whenever a voltage threshold is crossed [9]. The integrate-and-fire-type neuron models that are frequently used in computational neuroscience [12–15] generate a spike in neuron $j$ any time a voltage $V_j(t)$ crosses a fixed threshold $\psi$ and subsequently reset the voltage to a reset potential. Recently, it has been shown that in many physiologically relevant cases the leaky integrate-and-fire model can be equivalent to a level-crossing model without reset, where spikes are modeled as positive threshold crossings and are not followed by a reset [3, 6, 16]. Here, we therefore identify the spikes of a neuron $j$ with the positive level crossings of its voltage $V_j(t)$ and quantify the cross-correlation between level crossings in neurons 1 and 2 by the following level functional:

$$U_{Q(\varepsilon)}(\psi) = \lim_{\delta\to 0}\frac{1}{(2\delta)^2}\int_{Q(\varepsilon)} \mathbf{1}_{\{|V_1(s_1)-\psi|<\delta\}}\, V_1'(s_1)\,\mathbf{1}_{\{V_1'(s_1)\ge 0\}} \times \mathbf{1}_{\{|V_2(s_1+s_2)-\psi|<\delta\}}\, V_2'(s_1+s_2)\,\mathbf{1}_{\{V_2'(s_1+s_2)\ge 0\}}\, ds_1\, ds_2,$$
(9)

where $Q(\varepsilon) = I_1 \times I_2$ is a bounded and finite rectangle in $\mathbb{R}^2$ and $\psi$ denotes the voltage threshold in both neurons (see Fig. 1). Here $\delta$ is introduced to quantify the infinitesimal interval around the threshold $\psi$ where a spike takes place. We choose the same threshold for both neurons and two different variances ($\sigma_{V_1} \neq \sigma_{V_2}$) and keep all other parameters the same. $\sigma_{V_1} \neq \sigma_{V_2}$ represents the biological situation in which two neurons of the same neuronal type could have differences in the strength of their synaptic input and threshold-to-variance ratio but are exposed to the same temporal background statistics. We will consider the following random field $Z: \mathbb{R}^2 \times \Omega \to \mathbb{R}^2$, defined as $(s_1, s_2) \mapsto Z(s_1, s_2) = (V_1(s_1), V_2(s_1+s_2))$. $\Omega$ denotes the probability space; here $\Omega$ is the Gaussian probability space. The field $Z$ is Gaussian, and $Z(s_1, s_2)$ and $Z(0, s_2)$ are equal in distribution. We denote by $p_{s_2}(\cdot,\cdot)$ the bivariate Gaussian density of the vector $(V_1(s_1), V_2(s_1+s_2))$, which by stationarity depends only on the lag $s_2$. If $Q(\varepsilon) = [t, t+\varepsilon] \times [\tau, \tau+\varepsilon]$ and the prerequisites of Theorem 6.2 in [2] are fulfilled, we can write

$$E[U_{Q(\varepsilon)}(\psi)] = E\left[\#\left\{(s_1,s_2)\in Q(\varepsilon): V_1(s_1)=\psi,\ V_2(s_1+s_2)=\psi,\ V_1'(s_1)\ge 0,\ V_2'(s_1+s_2)\ge 0\right\}\right]$$
(10)
$$= \int_{Q(\varepsilon)} E\left[\left|\det Z'(s_1,s_2)\right|\,\mathbf{1}_{\{V_1'(s_1)\ge 0\}}\,\mathbf{1}_{\{V_2'(s_1+s_2)\ge 0\}} \,\middle|\, Z(s_1,s_2)=(\psi,\psi)\right] \times p_{s_2}(\psi,\psi)\, ds_1\, ds_2$$
(11)
$$= \varepsilon\int_\tau^{\tau+\varepsilon} E\left[\left|\det Z'(0,s_2)\right|\,\mathbf{1}_{\{V_1'(0)\ge 0\}}\,\mathbf{1}_{\{V_2'(s_2)\ge 0\}} \,\middle|\, Z(0,s_2)=(\psi,\psi)\right] \times p_{s_2}(\psi,\psi)\, ds_2$$
(12)
$$= \varepsilon\int_\tau^{\tau+\varepsilon} E\left[|V_1'(0)|\,|V_2'(s_2)|\,\mathbf{1}_{\{V_1'(0)\ge 0\}}\,\mathbf{1}_{\{V_2'(s_2)\ge 0\}} \,\middle|\, V_1(0)=\psi,\ V_2(s_2)=\psi\right] \times p_{s_2}(\psi,\psi)\, ds_2,$$
(13)

where the expectation value is denoted by $E[\cdot]$, and $\det Z'(s_1,s_2)$ is the determinant of the Jacobian matrix of the vector field $Z(s_1,s_2)$. Now, we are left to prove the conditions of Theorem 6.2 in [2]. First, we find that conditions (i) and (ii) of Theorem 6.2 are satisfied by definition. Condition (iii) holds because $p_{s_2}(\psi,\psi)$ is not degenerate. If we let $I_1$ and $I_2$ be two finite and bounded intervals in $\mathbb{R}$, condition (iv) is satisfied because

$$P\left\{\exists (s_1,s_2)\in I_1\times I_2 : Z(s_1,s_2)=(\psi,\psi),\ \det Z'(s_1,s_2)=0\right\} \le P\left\{\exists s_1\in I_1 : V_1(s_1)=\psi,\ V_1'(s_1)=0\right\} + P\left\{\exists s_2\in I_2 : V_2(s_2)=\psi,\ V_2'(s_2)=0\right\}.$$

Here $P$ denotes the probability measure. We can define the correlation of two spike trains as

$$\langle s_1(t) s_2(t+\tau)\rangle := \lim_{\varepsilon\to 0}\frac{E[U_{Q(\varepsilon)}(\psi)]}{\varepsilon^2} = E\left[V_1'(0)\,\mathbf{1}_{\{V_1'(0)\ge 0\}}\, V_2'(\tau)\,\mathbf{1}_{\{V_2'(\tau)\ge 0\}} \,\middle|\, V_1(0)=\psi,\ V_2(\tau)=\psi\right] \times p_\tau(\psi,\psi)$$
(14)
$$= \int_0^\infty\int_0^\infty \dot v_1 \dot v_2\, p_\tau(\psi, \dot v_1, \psi, \dot v_2)\, d\dot v_1\, d\dot v_2,$$
(15)

where $p_\tau(\psi, \dot v_1, \psi, \dot v_2)$ is the joint Gaussian density of the vector $(V_1(0), V_1'(0), V_2(\tau), V_2'(\tau))$. The conditional firing rate $\nu_{\mathrm{cond}}(\tau)$ then is

$$\nu_{\mathrm{cond}}(\tau) = \langle s_1(t) s_2(t+\tau)\rangle / \sqrt{\nu_1\nu_2},$$
(16)

where $\nu_j = \frac{\sigma_{V_j'}}{2\pi\sigma_{V_j}}\exp\left(-\frac{\psi^2}{2\sigma_{V_j}^2}\right)$ is the firing rate of neuron $j$, for $j=1,2$. In the next sections we provide closed-form expressions for $\langle s_1(t)s_2(t+\tau)\rangle$ and $\nu_{\mathrm{cond}}(\tau)$.
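The Rice rate above is simple enough to evaluate directly. The short Python sketch below is our illustration (parameter values taken from the caption of Fig. 2); it confirms that the figure parameters correspond to the quoted rates:

```python
import numpy as np

# Rice rate nu_j: upcrossing rate of level psi for a stationary Gaussian
# process with standard deviation sigma_v and derivative std sigma_vdot.
def rice_rate(psi, sigma_v, sigma_vdot):
    return sigma_vdot / (2.0 * np.pi * sigma_v) * np.exp(-psi**2 / (2.0 * sigma_v**2))

# Fig. 2 parameters: sigma_V = 10 mV, tau_s = 20 ms (so sigma_V' = sigma_V / tau_s),
# psi = 9.64 mV; the factor 1000 converts 1/ms to Hz.
print(1000 * rice_rate(9.64, 10.0, 10.0 / 20.0))  # ~5 Hz
print(1000 * rice_rate(9.64, 5.0, 5.0 / 20.0))    # ~1.24 Hz
```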

3 Cross-Correlations of Two Upward Level Crossings

Here, we address $\langle s_1(t)s_2(t+\tau)\rangle$ and provide a closed-form solution that is valid for any cross-correlation strength $r$ between two level-crossing processes and any time delay $\tau$.

Proposition 3.1 Following the steps outlined in the methods section, Sect. 6.1, we can apply a regression model and Mehler’s Formula (see Lemma  10.7 in [2]) to prove that

$$\langle s_1(t) s_2(t+\tau)\rangle = C_{(a,b)}(\tau)\, p_\tau(\psi,\psi),$$
(17)

where $p_\tau(\cdot,\cdot)$ is the joint Gaussian density of the voltages $V_j$ as defined in Sect. 2.2 and $C_{(a,b)}(\tau)$ is the series given by

$$C_{(a,b)}(\tau) = \left[b\bar\Phi(b) - \phi(b)\right]\left[a\bar\Phi(a) - \phi(a)\right]\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau) + \bar\Phi(a)\bar\Phi(b)\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau) + \sum_{n\ge 2}\phi(a)\phi(b)\, H_{n-2}(a) H_{n-2}(b)\,\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)^n}{n!\left(\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)\right)^{n-1}},$$
(18)

where

$$\sigma_{\epsilon_1}(\tau) = \left(\sigma_{V_1'}^2 - \left(\alpha_1^2\sigma_{V_1}^2 + 2\alpha_1\alpha_2\Sigma_{13}(\tau) + \alpha_2^2\sigma_{V_2}^2\right)\right)^{1/2},$$
(19)
$$\sigma_{\epsilon_2}(\tau) = \left(\sigma_{V_2'}^2 - \left(\beta_1^2\sigma_{V_1}^2 + 2\beta_1\beta_2\Sigma_{13}(\tau) + \beta_2^2\sigma_{V_2}^2\right)\right)^{1/2},$$
(20)
$$\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau) = \Sigma_{24}(\tau) - \left(\alpha_1\beta_1\sigma_{V_1}^2 + (\alpha_1\beta_2 + \alpha_2\beta_1)\Sigma_{13}(\tau) + \alpha_2\beta_2\sigma_{V_2}^2\right),$$
(21)

and

$$a = \frac{-\psi(\alpha_1+\alpha_2)}{\sigma_{\epsilon_1}(\tau)}, \qquad b = \frac{-\psi(\beta_1+\beta_2)}{\sigma_{\epsilon_2}(\tau)},$$

with

$$\alpha_1 = \frac{\Sigma_{14}(\tau)\Sigma_{13}(\tau)}{\sigma_{V_1}^2\sigma_{V_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \alpha_2 = \frac{-\Sigma_{14}(\tau)\sigma_{V_1}^2}{\sigma_{V_1}^2\sigma_{V_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \beta_1 = \frac{\sigma_{V_2}^2\Sigma_{14}(\tau)}{\sigma_{V_1}^2\sigma_{V_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \beta_2 = \frac{-\Sigma_{14}(\tau)\Sigma_{13}(\tau)}{\sigma_{V_1}^2\sigma_{V_2}^2 - \Sigma_{13}(\tau)^2}.$$

$\phi$ and $\Phi$ are the standard Gaussian density and distribution, respectively, and $\bar\Phi = 1-\Phi$. $H_n(z) = (-1)^n e^{z^2/2}\frac{d^n}{dz^n}\left(e^{-z^2/2}\right)$ are the Hermite polynomials. Note that the first two terms in Eq. (18) correspond to truncation orders $n=0$ and $n=1$, respectively.

To aid the explicit evaluation of $C_{(a,b)}(\tau)$ in Eq. (18) we provide code for a computer algebra system. Footnote 1 Figure 2(a), (b) demonstrates $\nu_{\mathrm{cond}}(\tau)$ obtained using Eq. (17) for progressively larger truncation orders $n$. Notably, we find close correspondence between the first truncation order $n=1$ and the large-$n$ limit ($n=10$). Figure 2(c), (d) shows $\nu_{\mathrm{cond}}(\tau)$ vs. $\tau$ as in Eq. (17) for varying correlation strength $r$. For simplicity, we chose $c(\tau) = \cosh(\tau/\tau_s)^{-1}$ and $r \in [0,1)$. We note that $\nu_{\mathrm{cond}}(\tau)$ for two identical neurons ($\sigma_{V_1} = \sigma_{V_2}$) is a symmetric function, while for a pair of neurons with different rates ($\sigma_{V_1} \neq \sigma_{V_2}$), $\nu_{\mathrm{cond}}(\tau)$ is asymmetric.
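For readers without a computer algebra system, the sketch below evaluates the truncated series of Eq. (18) numerically in Python for $c(\tau) = \cosh(\tau/\tau_s)^{-1}$. It is our reading of Eqs. (16)–(21), not the Mathematica code of Footnote 1, and the parameter values mirror Fig. 2(b).

```python
from math import erfc, factorial, pi, sqrt, exp, cosh, tanh

def phi(x):      # standard Gaussian density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def Phibar(x):   # Gaussian survival function 1 - Phi(x)
    return 0.5 * erfc(x / sqrt(2.0))

def hermite(n, x):  # probabilists' Hermite polynomials via recurrence
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h0 if n == 0 else h1

def nu_cond(tau, r, s1, s2, psi, tau_s, n_max=10):
    c = 1.0 / cosh(tau / tau_s)                              # c(tau)
    cp = -tanh(tau / tau_s) / tau_s * c                      # c'(tau)
    cpp = (1.0 - 2.0 / cosh(tau / tau_s)**2) / tau_s**2 * c  # c''(tau)
    S13, S14, S24 = r*s1*s2*c, r*s1*s2*cp, -r*s1*s2*cpp      # covariances
    D = (s1 * s2)**2 - S13**2
    a1, a2 = S14*S13/D, -S14*s1**2/D                         # alpha_1, alpha_2
    b1, b2 = s2**2*S14/D, -S14*S13/D                         # beta_1, beta_2
    se1 = sqrt(s1**2/tau_s**2 - (a1**2*s1**2 + 2*a1*a2*S13 + a2**2*s2**2))
    se2 = sqrt(s2**2/tau_s**2 - (b1**2*s1**2 + 2*b1*b2*S13 + b2**2*s2**2))
    cov = S24 - (a1*b1*s1**2 + (a1*b2 + a2*b1)*S13 + a2*b2*s2**2)
    a, b = -psi*(a1 + a2)/se1, -psi*(b1 + b2)/se2
    C = ((a*Phibar(a) - phi(a)) * (b*Phibar(b) - phi(b)) * se1 * se2
         + Phibar(a) * Phibar(b) * cov)                      # n = 0, 1 terms
    for n in range(2, n_max + 1):                            # n >= 2 terms
        C += (phi(a) * phi(b) * hermite(n-2, a) * hermite(n-2, b)
              * cov**n / (factorial(n) * (se1 * se2)**(n - 1)))
    # bivariate density p_tau(psi, psi) of (V_1(0), V_2(tau))
    p = exp(-psi**2 * (s1**2 + s2**2 - 2*S13) / (2*D)) / (2*pi*sqrt(D))
    nu = lambda s: exp(-psi**2 / (2*s**2)) / (2*pi*tau_s)    # Rice rate, 1/ms
    return C * p / sqrt(nu(s1) * nu(s2))

# parameters of Fig. 2(b); the factor 1000 converts 1/ms to Hz
print(1000 * nu_cond(0.0, 0.7, 10.0, 5.0, 9.64, 20.0))
```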

Fig. 2

Convergence of pairwise level-crossing correlations. a Spike correlation function $\nu_{\mathrm{cond}}(\tau)$ vs. time lag $\tau$ for different truncation orders $n$ in Eq. (17); $r=0.7$, $\sigma_{V_1}=\sigma_{V_2}=10$ mV, $\tau_s=20$ ms, $\nu=5$ Hz, $\psi=9.64$ mV. b $\nu_{\mathrm{cond}}(\tau)$ vs. $\tau$ for a pair of rate-heterogeneous neurons with $r=0.7$, $\sigma_{V_1}=10$ mV, $\sigma_{V_2}=5$ mV, $\tau_s=20$ ms, $\nu_1=5$ Hz, $\nu_2=1.24$ Hz, $\psi=9.64$ mV. c $\nu_{\mathrm{cond}}(\tau)$ vs. time lag for varying correlation strengths $r$ in a pair of neurons with $\sigma_{V_1}=\sigma_{V_2}=10$ mV, $\tau_s=20$ ms, $\nu=5$ Hz, $\psi=9.64$ mV. d $\nu_{\mathrm{cond}}(\tau)$ vs. time lag for varying correlation strengths $r$, in a pair of rate-heterogeneous neurons with $\sigma_{V_1}=10$ mV, $\sigma_{V_2}=5$ mV, $\tau_s=20$ ms, $\nu_1=5$ Hz, $\nu_2=1.24$ Hz, $\psi=9.64$ mV. The truncation order of the $\nu_{\mathrm{cond}}(\tau)$-series in Eq. (17) in c, d is $n=10$. In a–d the filled circles at $\tau=0$ indicate the predicted $\nu_{\mathrm{cond}}(0)$ (as in Eq. (23)) and colored squares denote the corresponding numerical simulations obtained with $N=2000$ independent realizations of 20 s length

Let us now briefly discuss the result we obtained in Eq. (17) within the context of the previous level-crossing literature. One of the closely related results is the variance of level crossings and maxima provided in Proposition 4.4 in [2]. However, this result is derived for one level-crossing process, while we addressed a pair of level-crossing processes. For multiple cross-correlated processes, orthant probabilities that describe expressions of the form $P(V_1(t) > \psi_1, V_2(t) > \psi_2)$ have been obtained, e.g. Lemma 4.3 on p. 78 in [2]. However, specific results for the cross-correlations of upward crossings are sparse. For two correlated upward level-crossing processes previous studies have addressed the limiting cases of weak ($r \to 0$) or strong ($r \to 1$, $r < 1$) cross-correlations [3, 10]. However, to address upcrossing correlations in the intermediate regime where neither $r \to 0$ nor $r \to 1$, no analytical methods were available. Therefore, it was previously necessary to numerically evaluate the associated Gaussian probability densities in Eq. (15). The direct numerical evaluation of multidimensional Gaussian integrals can be computationally and algorithmically demanding, requires adaptive grid procedures, and its accuracy can be hard to evaluate [17]. For the specific case of $\tau=0$ we show in the materials and methods section on 'Zero time lag correlations', Sect. 6.2, that a direct evaluation of the Gaussian double integral is possible via a variable substitution. The key to this variable substitution method was a manageable identity correlation matrix. For any other finite $\tau>0$ and a given finite correlation strength $0 \le r < 1$ we could not identify a transformation that leads to a tractable integral, and we therefore derived the series expansion in Eq. (17). This solution is explicit, such that each series term of order $n$ can be evaluated and studied separately. Furthermore, Eq. (17) consists of analytical functions with a well-studied parameter dependence. This makes it possible to predict the influence of a specific parameter, such as the time scale $\tau_s$, the firing rate $\nu_i$ or the correlation strength $r$, on the upward level-crossing correlations. As an example, we evaluate Eq. (18) for an identical pair of neurons up to third order in $r$ via a Taylor expansion. We obtain

$$\begin{aligned} \nu_{\mathrm{cond}}(\tau) ={}& \nu + r\nu\left(\frac{c(\tau)\psi^2}{\sigma_V^2} - \frac{\pi\tau_s^2 c''(\tau)}{2}\right) \\ &+ \frac{\nu r^2}{2}\left[c(\tau)^2\left(\frac{\psi^2}{\sigma_V^2}-1\right)^2 + \tau_s^2 c''(\tau)\left(c''(\tau)\tau_s^2 - \pi c(\tau)\frac{\psi^2}{\sigma_V^2}\right) + \tau_s^2 c'(\tau)^2\left(\frac{\psi^2}{\sigma_V^2}(2-\pi) - 2\right)\right] \\ &+ \nu r^3\left[\frac{c(\tau)^3}{3!}\left(\frac{\psi^3}{\sigma_V^3} - \frac{3\psi}{\sigma_V}\right)^2 + c(\tau)c''(\tau)\tau_s^2\left(\frac{c''(\tau)\tau_s^2\psi^2}{2\sigma_V^2} - \frac{\pi}{4}c(\tau)\left(\frac{\psi^2}{\sigma_V^2}-1\right)^2\right) - \tau_s^2 c(\tau)c'(\tau)^2\left(\frac{\pi}{2}\left(\frac{\psi^4}{\sigma_V^4}+1\right) + \frac{\psi^2}{\sigma_V^2}\right)\right]. \end{aligned}$$
(22)

We recognize that the linear and quadratic expressions are equivalent to the first and second order $r$-expansions reported in [3, 10]. The cubic term has not been reported before, to the best of our knowledge. This demonstrates consistency with previous results and illustrates that expansions of any order can be obtained via Eq. (18). In this context, it is desirable to have an exact reference point for deciding how many orders $n$ are necessary for Eq. (17) to be sufficiently accurate. Such a reference point can be the zero-lag value, which we calculate exactly. The deviation of Eq. (17) for a specific $n$ from this reference point can serve as an accuracy benchmark. Following the calculations in the methods Sect. 6.2, we derive

$$\nu_{\mathrm{cond}}(0) = \frac{1 + 2r\arctan\left(\sqrt{(1+r)/(1-r)}\right)/\sqrt{1-r^2}}{4\pi^2\sqrt{\nu_1\nu_2}\,\tau_s^2} \times \exp\left(-\frac{\psi^2}{4\sigma_{V_1}^2\sigma_{V_2}^2}\left[\frac{(\sigma_{V_1}+\sigma_{V_2})^2}{1+r} + \frac{(\sigma_{V_2}-\sigma_{V_1})^2}{1-r}\right]\right).$$
(23)

Figure 2(a), (b) demonstrates $\nu_{\mathrm{cond}}(\tau)$ obtained using Eq. (17) for different truncation orders $n$ alongside the zero-lag correlation $\nu_{\mathrm{cond}}(0)$. Figure 2(c), (d) demonstrates $\nu_{\mathrm{cond}}(\tau)$ obtained using Eq. (17) as a function of the correlation strength $r$ alongside the zero-lag correlation $\nu_{\mathrm{cond}}(0)$. As previously, we chose $c(\tau) = \cosh(\tau/\tau_s)^{-1}$ and $r \in [0,1)$. We note that for two identical neurons ($\sigma_{V_1} = \sigma_{V_2}$) $\nu_{\mathrm{cond}}(\tau)$ is a symmetric function. Yet, for a pair of neurons with different rates ($\sigma_{V_1} \neq \sigma_{V_2}$) the spike correlation function $\nu_{\mathrm{cond}}(\tau)$ is asymmetric, indicating that the lower-rate neuron spikes on average after the higher-rate neuron.
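The zero-lag reference point of Eq. (23) can be coded directly. The snippet below is a sketch under the same parameter assumptions as Fig. 2(b); it returns the value marked by the filled circles at $\tau=0$ and agrees with the series evaluation of the previous sketch at $\tau=0$.

```python
from math import atan, sqrt, exp, pi

# Zero-lag conditional rate, Eq. (23); nu(s) is the Rice rate with c(0) = 1
# and |c''(0)| = 1/tau_s**2, in units of 1/ms.
def nu_cond_zero(r, s1, s2, psi, tau_s):
    nu = lambda s: exp(-psi**2 / (2.0 * s**2)) / (2.0 * pi * tau_s)
    pref = 1.0 + 2.0 * r * atan(sqrt((1.0 + r) / (1.0 - r))) / sqrt(1.0 - r**2)
    expo = exp(-psi**2 / (4.0 * s1**2 * s2**2)
               * ((s1 + s2)**2 / (1.0 + r) + (s2 - s1)**2 / (1.0 - r)))
    return pref * expo / (4.0 * pi**2 * sqrt(nu(s1) * nu(s2)) * tau_s**2)

print(1000 * nu_cond_zero(0.7, 10.0, 5.0, 9.64, 20.0))  # Hz, cf. Fig. 2(b)
```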

3.1 Relation to the Leaky Integrate-and-Fire Model

Here, we address the relation between spike statistics and spike correlations in the level-crossing setting of our article and previous results in the leaky integrate-and-fire framework [12–15]. In a current-based leaky integrate-and-fire model driven by fluctuating noise the voltage of a neuron $V_j(t)$ is given by

$$\tau_M V_j'(t) = -V_j(t) + I_j(t),$$
(24)
$$\tau_I I_j'(t) = -I_j(t) + \sigma\xi(t),$$
(25)

where $I_j(t)$ is the input current of a neuron and $\xi(t)$ a white noise, unit variance drive. The voltage power spectrum for this model is a combination of low-pass filters, $f_V(\lambda) \propto \left[(1+\tau_M^2\lambda^2)(1+\tau_I^2\lambda^2)\right]^{-1}$, and its correlation function can be determined according to Eq. (8). If the voltage $V_j$ reaches the threshold $\psi$ the neuron $j$ emits a spike and the voltage is subsequently reset to a reset value $V_r$. The integrate-and-fire model differs only in one important detail from the level-crossing approach: the presence of a reset after a spike. A recent article by Laurent Badel systematically compared the validity of the upward level-crossing approximation for the firing rate, spike correlations and frequency response of a leaky integrate-and-fire neuron [16]. This study reached the conclusion that the upward level-crossing approach accurately represents the leaky integrate-and-fire model if two conditions are fulfilled: (1) the firing rate is much lower than the inverse of the typical relaxation time of the voltage, (2) the synaptic filtering time constant remains of the same order of magnitude as the membrane time constant ($\tau_I/\tau_M \approx 1$). Numerically, the validity of the approximation remained highly accurate even for synaptic time constants $0.4 \le \tau_I/\tau_M \le 2.6$.
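The effect of the reset is easy to probe in simulation. The following sketch is our illustration with assumed parameters in the low-rate regime: it integrates Eqs. (24)–(25) once with a reset and once without, counting upward level crossings in the reset-free copy; at low rates the two spike counts nearly coincide, consistent with [16].

```python
import numpy as np

# Sketch (assumed parameters): leaky integrate-and-fire neuron vs. the
# reset-free upcrossing count of the same voltage dynamics.
rng = np.random.default_rng(1)
dt, n_steps = 0.05, 1_000_000        # ms
tau_M, tau_I, sigma = 10.0, 10.0, 1.0
psi, v_reset = 1.3, 0.0              # threshold and reset potential (arbitrary units)

V = Vfree = I = v_prev = 0.0
lif_spikes = upcrossings = 0
for _ in range(n_steps):
    I += -I / tau_I * dt + sigma * np.sqrt(2.0 * dt / tau_I) * rng.standard_normal()
    # integrate-and-fire voltage: reset after each threshold crossing
    V += (-V + I) * dt / tau_M
    if V >= psi:
        lif_spikes += 1
        V = v_reset
    # reset-free copy of the same dynamics: count upward level crossings
    Vfree += (-Vfree + I) * dt / tau_M
    if v_prev < psi <= Vfree:
        upcrossings += 1
    v_prev = Vfree

T_sec = n_steps * dt / 1000.0
print(lif_spikes / T_sec, upcrossings / T_sec)  # rates in Hz: close at low rates
```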

A number of spike correlation results have been derived in the leaky integrate-and-fire model for the limit of weak correlations [18–20]. They include the observation that the spike correlation coefficient increases with firing rate [18, 19]. The equivalent firing-rate-dependent increase in spike correlations and correlation coefficients for low correlation strengths has been reported for the level-crossing model, see [3] and Fig. 3(A) (right) and Fig. 2(A) (top) in [10]. Furthermore, the leaky integrate-and-fire model exhibits a sublinear dependence of correlation coefficients on the input correlation strength $r$ [18, 21], which we see confirmed in Fig. 4.

4 Joint Gaussianity of Upcrossing Counts

Spike count cross-correlations and correlation coefficient measurements in pairs of neurons are ubiquitous in neuroscience and are often used to measure the strength of interdependencies in a pair of neurons, e.g. in cortical neurons [18, 19, 22], in model neurons [23] and in theoretical and experimental studies of net correlations emerging in recurrent networks [23–28]. Spike counts and their cross-correlations in neuroscience are often computed for a variety of bin sizes varying from $T = 0.1$–$1$ ms [22] to $T = 2$ s [29]. Here, we are interested in the question of when the spike counts of two neurons computed in a bin of size $T$ are jointly Gaussian, such that their cross-correlations are unbiased measures of statistical dependence or independence.

In this section we will prove that the spike counts of two neurons, approximated by upcrossings of a Gaussian process, approach a joint multivariate Gaussian distribution for large bin sizes $T$. We start by considering the marginal distributions of upcrossing counts. From the one-dimensional Central Limit Theorem proven in [2] we know that $\frac{1}{\sqrt{T}}\left(U_{[0,T]}^{V_j}(\psi) - E[U_{[0,T]}^{V_j}(\psi)]\right)$ converges for $T\to\infty$ to a one-dimensional centered normal variable with finite variance. We provide a direct illustration of this classical univariate result in Fig. 3(a). Now, it is tempting to conclude that because the distribution of counts in each neuron is Gaussian, the joint distribution of the vector $(U_{[0,T]}^{V_1}(\psi), U_{[0,T]}^{V_2}(\psi))$ is also a multivariate Gaussian distribution. Yet, this conclusion is mathematically forbidden. While a joint Gaussian distribution implies marginal Gaussianity, it is in general not possible to invert this relation [8]. The joint Gaussianity of spike counts is a highly desired property. If two counts are jointly Gaussian, zero count correlation directly implies statistical independence. If count correlations between neuron 1 and neuron 2 are zero such that $U_{[0,T]}^{V_1}(\psi)$ and $U_{[0,T]}^{V_2}(\psi)$ are uncorrelated, then only if the vector $(U_{[0,T]}^{V_1}(\psi), U_{[0,T]}^{V_2}(\psi))$ is from a multivariate Gaussian distribution is it possible to infer independence of neuron 1 and neuron 2. Let us consider a teaching counterexample where vanishing correlation between the variables $X$ and $Y$ does not imply independence: $X \sim N(0,1)$ and $Y = X^2$. We obtain $\operatorname{Cov}(X,Y) = 0$, but the two random variables are strongly linked. Contrasting examples where the variables $X$ and $Y$ are both marginally but not jointly Gaussian, have zero correlation but are not independent, can be found in Sect. 5 in [8]. To benefit from joint Gaussianity and be able to infer true statistical independence from vanishing correlations, we prove here the joint Gaussianity of upward level crossings/spike counts for large $T$. In the following we derive the Central Limit Theorem for the spike counts of two neurons, using two steps. First, we use the one-dimensional Central Limit Theorem proven in [2]. Second, we apply a version of the Breuer–Major Theorem adapted to our upward crossing setting, which we present in Sect. 6.4.
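The teaching counterexample is immediate to verify numerically. This sketch (ours) draws $X \sim N(0,1)$, sets $Y = X^2$, and confirms that the sample covariance vanishes while the dependence is perfect:

```python
import numpy as np

# X ~ N(0,1) and Y = X^2 are uncorrelated yet fully dependent: zero covariance
# without joint Gaussianity does not imply independence.
rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
y = x**2
print(np.cov(x, y)[0, 1])          # ~0 up to sampling noise
print(np.corrcoef(x**2, y)[0, 1])  # exactly 1: Y is a function of X
```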

Fig. 3

Univariate and multivariate count Gaussianity. a Univariate Gaussianity of counts $U_{[0,T]}^{V_1}(\psi) - E[U_{[0,T]}^{V_1}(\psi)]$ for large bin size $T = 25\tau_s$. A solid black line represents the corresponding zero-mean Gaussian fit. b Shapiro test p-value of the projections $A = X_i^T(\cos\theta, \sin\theta)$, for all $\theta \in [0,2\pi)$. c Probability density of $D^2$ (left). QQ-plot between the theoretically predicted $\chi_2^2$-quantiles and the empirical quantiles of $D_i^2$ in Eq. (35) (right). Figures b and c are both validations of the multivariate Gaussianity of the bivariate vector $X$ in Eq. (34). In all panels $\psi=0.3$, $\tau_s=1$ ms, $\sigma_{V_i}=1$

Theorem 4.1 Let $V_i(t)$, $i \in \{1,2\}$ be two processes satisfying Eq. (1), with covariance $C_{V_{ij}}(\tau) = E[V_i(0)V_j(\tau)]$, where $i,j = 1,2$, and a common spiking threshold $\psi$. To take advantage of the available Gaussianity proofs that are typically derived for unit variance processes, we rescale the voltages $V_i(t)$ and the threshold $\psi$ to obtain processes $X_i(t)$ with unit variance and unit variance of the derivatives. $X_i(t) = V_i\left(\sqrt{C_{V_{ii}}(0)/|C_{V_{ii}}''(0)|}\, t\right)/\sqrt{C_{V_{ii}}(0)}$ then has the correlation function $c(\tau) = C_{V_{ii}}\left(\sqrt{C_{V_{ii}}(0)/|C_{V_{ii}}''(0)|}\, \tau\right)/C_{V_{ii}}(0)$ and the spiking thresholds are $\psi_i = \psi/\sqrt{C_{V_{ii}}(0)}$. The number of $\psi_i$-level upcrossings in a time interval $T$ for process $X_i$ is given by $U_{[0,T]}^{X_i}(\psi_i)$. We assume that the following necessary and sufficient conditions hold. First, $E\{(U_{[0,T]}^{X_i}(\psi))^2\} < \infty$. This holds if and only if $c(\tau)$ satisfies Geman's Condition (see, e.g., Theorem 3 in [30]). Second, $\int_0^\infty |c^{(n)}(\tau)|\,d\tau < \infty$, where $n \in \{0,1,2\}$ is the order of derivation. Then

$$\frac{1}{\sqrt{T}}\begin{pmatrix} U_{[0,T]}^{X_1}(\psi_1) - E[U_{[0,T]}^{X_1}(\psi_1)] \\ U_{[0,T]}^{X_2}(\psi_2) - E[U_{[0,T]}^{X_2}(\psi_2)] \end{pmatrix} \xrightarrow[T\to\infty]{d} N\left(\begin{pmatrix}0\\0\end{pmatrix}, \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix}\right),$$
(26)

where the count covariances $a_{ij}$ with $i,j \in \{1,2\}$ are three convergent series, given by $0 < a_{ii} = \sum_{q=1}^\infty \sigma_{X_i}^2(q)$ and $0 < a_{12} = \sum_{q=1}^\infty \sigma_{X_1,X_2}(q)$, all of which are finite. The first two terms in these series for $\psi = \psi_i$, $C_{V_{11}}(0) = C_{V_{22}}(0)$ are

$$\sigma_{X_1}^2(1) = 2\phi(\psi)^2\left[\frac{\psi^2}{2\pi}\int_0^\infty c(s)\,ds - \frac{1}{4}\int_0^\infty c''(s)\,ds\right],$$
(27)
$$\sigma_{X_1}^2(2) = 2\phi(\psi)^2\left[\frac{1}{2\pi}\left(\psi^2-1\right)^2\int_0^\infty c(s)^2\,ds + \left(\left(\frac{1}{\pi}-\frac{1}{4}\right)\psi^2 - \frac{1}{\pi}\right)\int_0^\infty c'(s)^2\,ds\right.$$
(28)
$$\left. + \frac{1}{2\pi}\int_0^\infty c''(s)^2\,ds - \frac{1}{4}\psi^2\int_0^\infty c(s)c''(s)\,ds\right],$$
(29)

and the first and second order covariances are

$$\sigma_{X_1,X_2}(1) = 2\phi(\psi)^2\, r\left[\frac{\psi^2}{2\pi}\int_0^\infty c(s)\,ds - \frac{1}{4}\int_0^\infty c''(s)\,ds\right],$$
(30)
$$\sigma_{X_1,X_2}(2) = 2\phi^2(\psi)\, r^2\left[\frac{1}{2\pi}\left(\psi^2-1\right)^2\int_0^\infty c(s)^2\,ds + \left(\left(\frac{1}{\pi}-\frac{1}{4}\right)\psi^2 - \frac{1}{\pi}\right)\int_0^\infty c'(s)^2\,ds\right.$$
(31)
$$\left. + \frac{1}{2\pi}\int_0^\infty c''(s)^2\,ds - \frac{1}{4}\psi^2\int_0^\infty c(s)c''(s)\,ds\right].$$
(32)

We note that only for $r=1$ do we obtain $\sigma_{X_1,X_2}(j) = \sigma_{X_1}^2(j)$. The asymptotic Pearson correlation coefficient $\rho_T$, defined by

$$\lim_{T\to\infty}\rho_T = \frac{\operatorname{Cov}\left(U_{[0,T]}^{X_1}(\psi_1), U_{[0,T]}^{X_2}(\psi_2)\right)}{\sqrt{\operatorname{Var}\left(U_{[0,T]}^{X_1}(\psi_1)\right)\operatorname{Var}\left(U_{[0,T]}^{X_2}(\psi_2)\right)}} = \frac{a_{12}}{\sqrt{a_{11}a_{22}}},$$
(33)

will also converge to the respective ratio of the asymptotic covariances and variances.

4.1 Numerical Confirmation of Joint Gaussianity and Limit Covariances a i j

In the last section we showed that spike counts of two neurons in large bins approach a bivariate Gaussian distribution with finite variances. Here, we illustrate this theoretical result in simulations of level-crossing processes. We choose two methods based on linear combinations and the Mahalanobis distance to numerically confirm joint Gaussianity. First, we numerically confirm the joint Gaussianity by showing that all linear combinations of two simulated spike counts for a large bin are Gaussian. We consider a vector

$$X := \left(\frac{1}{\sqrt{T}}\left(U_{[0,T]}^{X_1}(\psi) - E[U_{[0,T]}^{X_1}(\psi)]\right),\ \frac{1}{\sqrt{T}}\left(U_{[0,T]}^{X_2}(\psi) - E[U_{[0,T]}^{X_2}(\psi)]\right)\right),$$
(34)

where $U_{[0,T]}^{X_1}(\psi)$, $U_{[0,T]}^{X_2}(\psi)$ and $E[U_{[0,T]}^{X_1}(\psi)]$, $E[U_{[0,T]}^{X_2}(\psi)]$ are the spike counts and their averages in neurons 1 and 2, respectively. $X$ consists of $N$ two-dimensional samples, where $N$ denotes the number of sample realizations and $i \in [1,N]$ the consecutive sample number. We project each two-dimensional element $X_i$ in all directions $(\cos\theta, \sin\theta)$ by calculating the scalar product $A = X_i^T(\cos\theta, \sin\theta)$, with $\theta \in [0,2\pi)$. Subsequently, we apply a Shapiro test on $A$ to verify Gaussianity for all univariate projections. The p-value of this Shapiro test conveys the certainty with which the Gaussian hypothesis cannot be rejected. As a second numerical test of joint Gaussianity we use the Mahalanobis distance. This test is based on the fact that if $X \sim N_d(\mu, \Sigma)$, where $d$ is the dimensionality, $\mu$ the mean and $\Sigma$ the covariance matrix, then the Mahalanobis distance $D^2$ with entries $D_i^2$,

$$D_i^2 = \left\{(X_i - \mu)^T\Sigma^{-1}(X_i - \mu),\ i = 1, \ldots, N\right\},$$
(35)

is distributed according to a $\chi_d^2$-distribution with $d$ degrees of freedom (see, e.g., Sect. 3.1.4 and Eq. (3.16) in [4]). By numerically estimating the count sample average $\mu$ and covariance $\Sigma$ we calculate in our case $D_i^2$ and compare it with a $\chi_2^2$-distribution, using the QQ-plot method (see Fig. 3(c)).

Figure 3 demonstrates the results of the joint Gaussianity tests for a bin size $T = 25\tau_s$, where $\tau_s = 1$ ms. Figure 3(a) shows the empirical univariate distribution of spike counts in one level-crossing process derived from $N = 10000$ independent count realizations. Figure 3(b) demonstrates that in $N = 10000$ independent count realizations of $X$ the p-values for all $\theta$ are above the 10 % significance level. Figure 3(c) (left) illustrates that the Mahalanobis distances $D^2$ of the two-dimensional spike count variable $X$ are well approximated by the $\chi_2^2$-distribution (solid line). Figure 3(c) (right) demonstrates in a QQ-plot of the empirically measured $D^2$-quantiles and the theoretical $\chi_2^2$-quantiles that they are linearly related. This is an indication that both distributions are equal.
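Both tests are straightforward to reproduce. The sketch below is our illustration: it applies them to a synthetic correlated bivariate Gaussian sample standing in for the count vector $X$ of Eq. (34); with simulated level-crossing counts in place of the synthetic sample, the same code reproduces the checks of Fig. 3(b), (c).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 10_000
cov = np.array([[1.0, 0.6], [0.6, 1.0]])      # stand-in for the count covariance
X = rng.multivariate_normal(np.zeros(2), cov, size=N)

# (1) Shapiro test on the projections A = X_i^T (cos(theta), sin(theta))
pvals = []
for theta in np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False):
    A = X @ np.array([np.cos(theta), np.sin(theta)])
    pvals.append(stats.shapiro(A[:500]).pvalue)  # moderate subsample for shapiro
print(np.median(pvals), min(pvals))              # p-values across directions, cf. Fig. 3(b)

# (2) Mahalanobis distances D_i^2 of Eq. (35) versus the chi-square with d = 2
mu, Sinv = X.mean(axis=0), np.linalg.inv(np.cov(X.T))
D2 = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)
print(stats.kstest(D2, stats.chi2(df=2).cdf).pvalue)  # large p: consistent
```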

Figure 4 addresses the numerical confirmation of the constant asymptotic covariances $a_{ij}$ introduced in Theorem 4.1 in Eq. (26). We choose $\psi = 0.3$, $\tau_s = 1$, for $N = 5000$ independent count realizations. To numerically compute the covariances $a_{ij}$ from spike count simulations we used the covariance-matrix estimator proposed by [31]. Figure 4(a) demonstrates the convergence of the covariance $a_{12}$ to the finite value predicted by Theorem 4.1. The asymptotic large-$T$ limit is denoted by a brief colored horizontal line. Figure 4(b) demonstrates the dependence of this asymptotic limit on the correlation strength $r$. As expected, we find that the asymptotic estimated covariance $a_{12}$ is close to zero for $r=0$ and close to the variance for $r=1$. Figure 4(c) demonstrates that the interplay between the covariance $a_{12}$ and the variances $a_{11}$ and $a_{22}$ leads to a sublinear dependence of the asymptotic correlation coefficient $\rho_T = a_{12}/\sqrt{a_{11}a_{22}}$ (Eq. (33)) on $r$ in the large time bin limit, $T = 25\tau_s$.

Fig. 4

Finite asymptotic limit of covariances $a_{ij}$ and correlation coefficient $\rho_T$. a Count covariance $a_{12} = \sum_{q=1}^\infty \sigma_{X_1,X_2}(q)$ vs. time bin $T$ for varying correlation strengths $r$ in the case of two identical neurons. The covariances converge for increasing time bin $T$ towards the value predicted in Theorem 4.1 (indicated by small, thick lines). Here $\tau_s = 1$ and $\psi = 0.3$. b Limit value of the covariance $a_{12}$ for $T = 20\tau_s$ (from a) vs. $r$. The case of $r=1$ (black) corresponds to the variance $a_{11} \approx 5.5\cdot 10^{-3}$, indicated by the dashed horizontal line. c Correlation coefficient $\rho_T$ vs. $r$ for large time bins ($T = 20\tau_s$). The dashed line indicates the equality line

5 Conclusions

Level-crossing phenomena occur in a variety of physical and biological sciences. In many of these situations the coordination between level crossings of multiple cross-correlated Gaussian processes is of interest. Here, we focused on neuroscience and modeled the spikes of two cross-correlated neurons by two cross-correlated level-crossing processes. While crossings and extrema of one level-crossing process have been the focus of mathematical research, results describing the coordination of multiple level-crossing processes are sparse and typically available only in specific and limited cases. Limits where level-crossing cross-correlations have been previously calculated are the weak and strong input correlation limits [3]. Here, we studied the case of two cross-correlated upward crossing processes and derived closed-form expressions for their joint level-crossing coordination as well as their joint count Gaussianity. Importantly, the results we present in this article are consistent with previously reported limits, but we have now extended and generalized them. The two main results of our article are (1) a closed-form explicit solution of the level-crossing cross-correlations and (2) the joint Gaussian limit of level-crossing counts. Our first result provides an explicit solution to $\nu_{\mathrm{cond}}(\tau) = \langle s_1(t)s_2(t+\tau)\rangle/\sqrt{\nu_1\nu_2}$ that is valid for all correlation strengths and which comprises previously obtained limits, see the discussion in Sect. 3. The rate of level crossings by a one-dimensional Gaussian process is given by the prominent Rice's equation derived by Rice in the 1950s [11]. The solution we obtained for the level-crossing cross-correlation $\nu_{\mathrm{cond}}(\tau)$ extends the Rice rate to the joint rate of two correlated processes. Our second result proves the joint Gaussianity of level crossings for large bin sizes. The joint Gaussianity of spike counts is a highly desired property because if and only if two level-crossing counts are jointly Gaussian can zero count cross-correlation imply statistical independence. Notably, marginal Gaussianity of spike counts in each neuron combined with zero count cross-correlation is not sufficient to imply independence. Contrasting examples where the variables $X$ and $Y$ are both marginally but not jointly Gaussian, have zero cross-correlation but are not independent, can be found in Sect. 5 in [8]. Count covariance and measures derived from it, such as the Pearson correlation coefficient, are computationally inexpensive and widely used as measures of statistical interdependencies [8]. Therefore, it is highly desirable to investigate the joint Gaussianity of level counts and thereby delimit the parameter space and mathematical conditions ensuring that independence can be inferred from a zero correlation coefficient. Notably, the joint Gaussianity of spike counts in bins of size $T$, where $T$ is much larger than the intrinsic time constant $\tau_s$ ($T \gg \tau_s$), also implies that models of multi-neuronal dynamics only need to consider the mean and variance of spike counts because all higher cumulants are zero.

6 Materials and Methods

6.1 Proof of Proposition 3.1

For simplicity of notation we adopt the following convention: $(X_1, Y_1, X_2, Y_2)$ denotes the vector $(V_1(0), V_1'(0), V_2(\tau), V_2'(\tau))$. In order to calculate $\langle s_1(t)s_2(t+\tau)\rangle$, defined in (14), we use the following regression model:

$$\begin{cases} Y_1 = \alpha_1 X_1 + \alpha_2 X_2 + \epsilon_1, \\ Y_2 = \beta_1 X_1 + \beta_2 X_2 + \epsilon_2, \end{cases}$$

where $(\epsilon_1, \epsilon_2)$ and $(X_1, X_2)$ are independent. We use the covariance matrix in (4) and obtain

$$\alpha_1 = \frac{\Sigma_{14}(\tau)\Sigma_{13}(\tau)}{\sigma_{X_1}^2\sigma_{X_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \alpha_2 = \frac{-\Sigma_{14}(\tau)\sigma_{X_1}^2}{\sigma_{X_1}^2\sigma_{X_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \beta_1 = \frac{\sigma_{X_2}^2\Sigma_{14}(\tau)}{\sigma_{X_1}^2\sigma_{X_2}^2 - \Sigma_{13}(\tau)^2}, \qquad \beta_2 = \frac{-\Sigma_{13}(\tau)\Sigma_{14}(\tau)}{\sigma_{X_1}^2\sigma_{X_2}^2 - \Sigma_{13}(\tau)^2}.$$

The conditional distribution $\mathcal{L}(Y_1, Y_2 \mid X_1 = \psi, X_2 = \psi)$ is a bivariate Gaussian distribution

$$N\left(\begin{pmatrix} \psi(\alpha_1+\alpha_2) \\ \psi(\beta_1+\beta_2) \end{pmatrix}, \begin{pmatrix} \sigma_{\epsilon_1}(\tau)^2 & \operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau) \\ \operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau) & \sigma_{\epsilon_2}(\tau)^2 \end{pmatrix}\right).$$
(36)

Using the regression system above, we write the conditional expectation

$$\begin{aligned} &E\left[Y_1\mathbf{1}_{\{Y_1\in[0,\infty)\}}\, Y_2\mathbf{1}_{\{Y_2\in[0,\infty)\}} \mid X_1=\psi,\ X_2=\psi\right] \\ &\quad = E\left[\mathbf{1}_{\{\epsilon_1\in[-\psi(\alpha_1+\alpha_2),\infty)\}}\,\mathbf{1}_{\{\epsilon_2\in[-\psi(\beta_1+\beta_2),\infty)\}}\left(\psi^2(\alpha_1+\alpha_2)(\beta_1+\beta_2) + \epsilon_1\psi(\beta_1+\beta_2) + \epsilon_2\psi(\alpha_1+\alpha_2) + \epsilon_1\epsilon_2\right)\right] \\ &\quad = \sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)\left(ab\, E\left[\mathbf{1}_{\{Z_1\in[a,\infty)\}}\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] - b\, E\left[Z_1\mathbf{1}_{\{Z_1\in[a,\infty)\}}\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] - a\, E\left[Z_2\mathbf{1}_{\{Z_1\in[a,\infty)\}}\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] + E\left[Z_1\mathbf{1}_{\{Z_1\in[a,\infty)\}}\, Z_2\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right]\right), \end{aligned}$$
(37)

where $Z_1 = \frac{\epsilon_1}{\sigma_{\epsilon_1}(\tau)}$, $Z_2 = \frac{\epsilon_2}{\sigma_{\epsilon_2}(\tau)}$, $a = \frac{-\psi(\alpha_1+\alpha_2)}{\sigma_{\epsilon_1}(\tau)}$, $b = \frac{-\psi(\beta_1+\beta_2)}{\sigma_{\epsilon_2}(\tau)}$. Now, we calculate the four different expectations in Eq. (37) using Mehler's Formula (Lemma 10.7 in [2]). First, we write

$$E\left[Z_1\mathbf{1}_{\{Z_1\in[a,\infty)\}}\, Z_2\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] = \sum_{n=0}^\infty c_n(a)\, c_n(b)\, n!\left(\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)}{\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)}\right)^n,$$
(38)

where $c_n(a)$ and $c_n(b)$ are the Hermite coefficients associated with this expectation. These Hermite coefficients are given by

$$c_n(a) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_a^\infty z\,\frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz, \qquad c_n(b) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_b^\infty z\,\frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz.$$

In particular,

  • for $n=0$, $c_0(a) = \frac{e^{-a^2/2}}{\sqrt{2\pi}} = \phi(a)$, $c_0(b) = \phi(b)$,

  • for $n=1$, $c_1(a) = a\phi(a) - \Phi(a) + 1$, $c_1(b) = b\phi(b) - \Phi(b) + 1$,

  • for $n\ge 2$, $c_n(a) = \frac{\phi(a)}{n!}\left(aH_{n-1}(a) + H_{n-2}(a)\right)$, $c_n(b) = \frac{\phi(b)}{n!}\left(bH_{n-1}(b) + H_{n-2}(b)\right)$.

Analogously we obtain

$$E\left[\mathbf{1}_{\{Z_1\in[a,\infty)\}}\,\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] = \sum_{n=0}^\infty c_n(a)\, c_n(b)\, n!\left(\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)}{\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)}\right)^n,$$
(39)

with

$$c_n(a) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_a^\infty \frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz, \qquad c_n(b) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_b^\infty \frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz.$$

In particular,

  • for $n=0$, $c_0(a) = 1 - \Phi(a) = \bar\Phi(a)$, $c_0(b) = \bar\Phi(b)$,

  • for $n=1$, $c_1(a) = \phi(a)$, $c_1(b) = \phi(b)$,

  • for $n\ge 2$, $c_n(a) = \frac{\phi(a)}{n!}H_{n-1}(a)$, $c_n(b) = \frac{\phi(b)}{n!}H_{n-1}(b)$.

We also have

$$E\left[\mathbf{1}_{\{Z_1\in[a,\infty)\}}\, Z_2\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] = \sum_{n=0}^\infty c_n(a)\, c_n(b)\, n!\left(\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)}{\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)}\right)^n,$$
(40)

with

$$c_n(a) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_a^\infty \frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz, \qquad c_n(b) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_b^\infty z\,\frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz.$$

In particular,

  • for $n=0$, $c_0(a) = \bar\Phi(a)$, $c_0(b) = \phi(b)$,

  • for $n=1$, $c_1(a) = \phi(a)$, $c_1(b) = b\phi(b) - \Phi(b) + 1$,

  • for $n\ge 2$, $c_n(a) = \frac{\phi(a)}{n!}H_{n-1}(a)$, $c_n(b) = \frac{\phi(b)}{n!}\left(bH_{n-1}(b) + H_{n-2}(b)\right)$.

Finally,

$$E\left[Z_1\mathbf{1}_{\{Z_1\in[a,\infty)\}}\,\mathbf{1}_{\{Z_2\in[b,\infty)\}}\right] = \sum_{n=0}^\infty c_n(a)\, c_n(b)\, n!\left(\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)}{\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)}\right)^n,$$
(41)

with

$$c_n(a) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_a^\infty z\,\frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz, \qquad c_n(b) = \frac{(-1)^n}{n!\sqrt{2\pi}}\int_b^\infty \frac{d^n}{dz^n}\left(e^{-z^2/2}\right)dz.$$

In particular,

  • for $n=0$, $c_0(a) = \phi(a)$, $c_0(b) = \bar\Phi(b)$,

  • for $n=1$, $c_1(a) = a\phi(a) - \Phi(a) + 1$, $c_1(b) = \phi(b)$,

  • for $n\ge 2$, $c_n(a) = \frac{\phi(a)}{n!}\left(aH_{n-1}(a) + H_{n-2}(a)\right)$, $c_n(b) = \frac{\phi(b)}{n!}H_{n-1}(b)$.

Then, combining the four expectations (38)–(41), the associated Hermite coefficients, and Eq. (37), we obtain

$$\begin{aligned} &E\left[Y_1\mathbf{1}_{\{Y_1\in[0,\infty)\}}\, Y_2\mathbf{1}_{\{Y_2\in[0,\infty)\}} \mid X_1=\psi,\ X_2=\psi\right] \\ &\quad = \left(b\bar\Phi(b) - \phi(b)\right)\left(a\bar\Phi(a) - \phi(a)\right)\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau) + \bar\Phi(a)\bar\Phi(b)\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau) \\ &\qquad + \sum_{n\ge 2}\frac{\operatorname{Cov}(\epsilon_1,\epsilon_2)(\tau)^n}{n!\left(\sigma_{\epsilon_1}(\tau)\sigma_{\epsilon_2}(\tau)\right)^{n-1}}\left(\phi(a)\phi(b)\, H_{n-2}(a) H_{n-2}(b)\right). \end{aligned}$$
(42)

Note that the first two terms in Eq. (42) correspond to orders $n=0$ and $n=1$, respectively. Denoting $E[Y_1\mathbf{1}_{\{Y_1\in[0,\infty)\}}\, Y_2\mathbf{1}_{\{Y_2\in[0,\infty)\}} \mid X_1=\psi, X_2=\psi] = C_{(a,b)}(\tau)$ we find $\langle s_1(t)s_2(t+\tau)\rangle = C_{(a,b)}(\tau)\, p_\tau(\psi,\psi)$. Here, $C_{(a,b)}(\tau)$ is a uniformly convergent series.  □
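The expansion (38) together with the coefficients above can be checked directly by Monte Carlo. The following sketch is ours, with arbitrary test values for $\rho$, $a$, $b$; it compares the truncated Mehler series against a sample estimate for a correlated standard normal pair:

```python
import numpy as np
from math import erfc, factorial, sqrt, pi, exp

# Monte Carlo check of the Mehler series (38) for E[Z1 1_{Z1>=a} Z2 1_{Z2>=b}]
# with correlated standard normals (Z1, Z2) of correlation rho.
rng = np.random.default_rng(4)
rho, a, b = 0.6, -0.3, 0.5

z1 = rng.standard_normal(2_000_000)
z2 = rho * z1 + sqrt(1.0 - rho**2) * rng.standard_normal(2_000_000)
mc = np.mean(z1 * (z1 >= a) * z2 * (z2 >= b))

phi = lambda x: exp(-0.5 * x * x) / sqrt(2.0 * pi)
Phibar = lambda x: 0.5 * erfc(x / sqrt(2.0))

def hermite(n, x):  # probabilists' Hermite polynomials via recurrence
    h0, h1 = 1.0, x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h0 if n == 0 else h1

def c_n(n, x):  # Hermite coefficients of z * 1_{z >= x}, as derived above
    if n == 0:
        return phi(x)
    if n == 1:
        return x * phi(x) - (1.0 - Phibar(x)) + 1.0
    return phi(x) / factorial(n) * (x * hermite(n - 1, x) + hermite(n - 2, x))

series = sum(c_n(n, a) * c_n(n, b) * factorial(n) * rho**n for n in range(30))
print(mc, series)  # the two values should agree to Monte Carlo accuracy
```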

6.2 Zero Time Lag Correlations

In this section we derive the zero-lag spike correlation using the Gaussian probability integrals in Eq. (15). Following the previously introduced notation, where the spiking threshold level in both neurons is $\psi$ and the variances are $\sigma_{V_i}$, we can write

$$\nu_{\mathrm{cond}}(\tau) = \langle s_1(t)s_2(t+\tau)\rangle/\sqrt{\nu_1\nu_2} = \frac{1}{\sqrt{\nu_1\nu_2}}\int_0^\infty\int_0^\infty \dot V_1\, \dot V_2\, p_\tau(\psi, \dot V_1, \psi, \dot V_2)\, d\dot V_1\, d\dot V_2.$$

Now, we substitute the variables:

$$\begin{aligned} \Sigma &= \frac{\psi(\sigma_{V_1}+\sigma_{V_2})}{2\sqrt{\sigma_{V_1}\sigma_{V_2}\left(r\sigma_{V_1}\sigma_{V_2}+\sigma_{V_1}\sigma_{V_2}\right)}}, & \dot\Sigma &= \frac{\sqrt{\sigma_{\dot V_2}/\sigma_{\dot V_1}}\,\dot V_1(t) + \sqrt{\sigma_{\dot V_1}/\sigma_{\dot V_2}}\,\dot V_2(t+\tau)}{2\sqrt{\sigma_{\dot V_1}\sigma_{\dot V_2} - r\sigma_{V_1}\sigma_{V_2}c''(\tau)}}, \\ \Delta &= \frac{\psi(\sigma_{V_2}-\sigma_{V_1})}{2\sqrt{\sigma_{V_1}\sigma_{V_2}\left(\sigma_{V_1}\sigma_{V_2}-r\sigma_{V_1}\sigma_{V_2}\right)}}, & \dot\Delta &= \frac{\sqrt{\sigma_{\dot V_2}/\sigma_{\dot V_1}}\,\dot V_1(t) - \sqrt{\sigma_{\dot V_1}/\sigma_{\dot V_2}}\,\dot V_2(t+\tau)}{2\sqrt{\sigma_{\dot V_1}\sigma_{\dot V_2} + r\sigma_{V_1}\sigma_{V_2}c''(\tau)}}. \end{aligned}$$

The correlation matrix $C_{\Sigma,\dot\Delta,\dot\Sigma,\Delta}$ for $\tau=0$ is a four-dimensional identity matrix.

$$\begin{aligned} \nu_{\mathrm{cond}}(0) ={}& \frac{\sigma_{\dot V_1}^2\sigma_{\dot V_2}^2\sqrt{1-r^2}}{\sqrt{\nu_1\nu_2}\, 8\pi^2(1-r^2)\,\sigma_{V_1}\sigma_{V_2}\sigma_{\dot V_1}\sigma_{\dot V_2}} \int_{-\infty}^{\infty}\left[\exp\left(-\frac{\dot\Delta^2(1-r)}{2(1+r)}\right)|\dot\Delta|\left(1-r^2\right)\right. \\ &\left. + \sqrt{\frac{\pi}{2}}\left[(1+r) - \dot\Delta^2(1-r)\right]\operatorname{Erfc}\left(\left|\dot\Delta\sqrt{\frac{1-r}{2(1+r)}}\right|\right)\right] \\ &\times \exp\left(-\frac{\psi^2(\sigma_{V_1}+\sigma_{V_2})^2}{4\sigma_{V_1}\sigma_{V_2}\left(r\sigma_{V_1}\sigma_{V_2}+\sigma_{V_1}\sigma_{V_2}\right)} - \frac{\psi^2(\sigma_{V_2}-\sigma_{V_1})^2}{4\sigma_{V_1}\sigma_{V_2}\left(\sigma_{V_1}\sigma_{V_2}-r\sigma_{V_1}\sigma_{V_2}\right)} - \frac{\dot\Delta^2}{2}\right)d\dot\Delta.$$ \end{aligned}$$
(43)

Solving this integral we obtain Eq. (23).

6.3 Proof of Theorem 4.1

To prove joint Gaussianity, it is instructive to briefly recapitulate the one-dimensional Central Limit Theorem. We start by writing

$$U_{[0,T]}^{X_i}(\psi_i) = \sum_{j=0}^\infty\sum_{k=0}^\infty d_j^{(i)}(\psi_i)\, a_k\int_0^T H_j(X_i(s))\, H_k(X_i'(s))\, ds,$$
(44)

with $d_j^{(i)}(\psi_i) = \frac{1}{j!}\phi(\psi_i)H_j(\psi_i)$ and $a_k = \frac{1}{k!}\int_0^\infty x H_k(x)\phi(x)\,dx$. Defining the level count deviation for neuron $i$ by $S_i(T)$ we obtain

$$S_i(T) = \frac{1}{\sqrt{T}}\left(U_{[0,T]}^{X_i}(\psi_i) - E[U_{[0,T]}^{X_i}(\psi_i)]\right) = \sum_{q=1}^\infty\frac{1}{\sqrt{T}}\int_0^T\sum_{k+j=q} d_j^{(i)}(\psi_i)\, a_k\, H_j(X_i(s))\, H_k(X_i'(s))\, ds$$
(45)
$$= \sum_{q=1}^\infty\frac{1}{\sqrt{T}}\int_0^T G_q^i(X_i(s), X_i'(s))\, ds$$
(46)
$$= \sum_{q=1}^\infty J_q^i(T, X_i, X_i'),$$
(47)

where $G_q^i(x_1, x_2) = \sum_{k+j=q} d_j^{(i)}(\psi_i)\, a_k\, H_j(x_1) H_k(x_2)$. A Gaussian distribution is a stable limit distribution for a sum of independent finite variance variables. Therefore, all that is left to prove is that contributions of different orders $q$ are independent and have finite variance. From Mehler's Formula we recognize that the contributions for $q \neq q'$ are independent. The finite variance follows from the observation that for all $q$ the variance of $G_q^i(X_i(s), X_i'(s))$ is proportional to the expectation of a product of four Hermite polynomials, which has been proven to be finite (Theorem 10.10 in [2]) if the conditions of Theorem 4.1 are satisfied.

Now, we address the joint Gaussianity. A random vector is jointly Gaussian if and only if any linear combination of its components has a univariate normal distribution (see [4, 32, 33]). Thus, we need to prove that the sequence $\alpha_1 S_1(T) + \alpha_2 S_2(T)$ is asymptotically Gaussian and satisfies a Central Limit Theorem, for all $\alpha_1, \alpha_2 \in \mathbb{R}$. We start from Eq. (45) and consider a truncated series $S_Q^i(T) = \sum_{q=1}^Q J_q^i(T, X_i, X_i')$ consisting of the first $Q$ terms, and denote by $\|\cdot\|$ the norm in $L^2(\Omega)$. First, we show that the remainder is bounded:

$$\lim_{T\to\infty}\left\|\sum_{i=1}^2\alpha_i S_i(T) - \sum_{i=1}^2\alpha_i S_Q^i(T)\right\| \le |\alpha_1|\sqrt{\sum_{q=Q+1}^\infty\sigma_{X_1}^2(q)} + |\alpha_2|\sqrt{\sum_{q=Q+1}^\infty\sigma_{X_2}^2(q)}.$$
(48)

As $Q$ grows, this difference diminishes, such that if $\alpha_1 S_Q^1(T) + \alpha_2 S_Q^2(T)$ is Gaussian for large $Q$ then this will imply that $\alpha_1 S_1(T) + \alpha_2 S_2(T)$ is Gaussian, too. Now, we only need to show Gaussianity of $\alpha_1 S_Q^1(T) + \alpha_2 S_Q^2(T)$. Using a modified version of the Breuer–Major Theorem (Sect. 6.4), we know that for each $q$

$$\alpha_1 J_q^1(T, X_1, X_1') + \alpha_2 J_q^2(T, X_2, X_2') \xrightarrow[T\to\infty]{d} N\left(0, \sigma_{\tilde G_q}^2\right),$$

where $\sigma_{\tilde G_q}^2$ is given in Eq. (51). The same theorem implies that $\alpha_1 J_q^1(T, X_1, X_1') + \alpha_2 J_q^2(T, X_2, X_2')$ and $\alpha_1 J_{q'}^1(T, X_1, X_1') + \alpha_2 J_{q'}^2(T, X_2, X_2')$ are asymptotically independent if $q \neq q'$. Thus we obtain for any $Q \ge 1$

$$\alpha_1 S_Q^1(T) + \alpha_2 S_Q^2(T) \xrightarrow[T\to\infty]{d} N\left(0, \sum_{q=1}^Q\sigma_{\tilde G_q}^2\right).$$

To calculate the count covariances in Eq. (26) we use Lemma 10.7 in [2] and obtain

$$a_{ij} = 2\sum_{q=1}^\infty\sum_{k=0}^q\sum_{k'=0}^q d_{q-k}^{(1)}(\psi_i)\, d_{q-k'}^{(2)}(\psi_j)\, a_k\, a_{k'} \times \int_0^\infty E\left[H_{q-k}(X_i(0))\, H_k(X_i'(0))\, H_{q-k'}(X_j(s))\, H_{k'}(X_j'(s))\right]ds.$$
(49)

Applying the Mehler Formula to a four-dimensional Gaussian random vector (Lemma 10.7 in [2]) we find

$$E\left[H_{q-k}(X_i(0))\, H_k(X_i'(0))\, H_{q'-k'}(X_j(s))\, H_{k'}(X_j'(s))\right] = \begin{cases} 0, & \text{if } q \neq q', \\ \left(r^q\right)^{1-\delta_{ij}}\displaystyle\sum_{(d_1,d_2,d_3,d_4)\in Z}\frac{(q-k)!\,k!\,(q-k')!\,k'!}{d_1!\,d_2!\,d_3!\,d_4!}\, c(s)^{d_1} c'(s)^{d_2}\left(-c'(s)\right)^{d_3}\left(-c''(s)\right)^{d_4}, & \text{if } q = q'. \end{cases}$$
(50)

Here, $\delta_{ij}$ is the Kronecker delta and $Z$ is the set of $d_i$'s satisfying: $d_i \ge 0$, $d_1 + d_2 = q-k$, $d_3 + d_4 = k$, $d_1 + d_3 = q-k'$, and $d_2 + d_4 = k'$. We thus can express the covariances of the spike counts as

$$a_{ij} = 2\sum_{q=1}^\infty\left(r^q\right)^{1-\delta_{ij}}\sum_{k=0}^q\sum_{k'=0}^q d_{q-k}^{(1)}(\psi)\, d_{q-k'}^{(2)}(\psi)\, a_k\, a_{k'} \times \int_0^\infty\sum_{(d_1,d_2,d_3,d_4)\in Z}\frac{(q-k)!\,k!\,(q-k')!\,k'!}{d_1!\,d_2!\,d_3!\,d_4!}\, c(s)^{d_1} c'(s)^{d_2}\left(-c'(s)\right)^{d_3}\left(-c''(s)\right)^{d_4}\, ds.$$

This is the result reported in Theorem 4.1.  □

6.4 Modified Breuer–Major Theorem

Here, we adapt the Breuer–Major Theorem [34] to show that the bivariate vector $(J_q^1(T, X_1, X_1'), J_q^2(T, X_2, X_2'))$ is Gaussian.

Theorem 6.1 We consider two zero mean and unit variance Gaussian processes $X_i(t)$, which are described by the properties in Sect. 2, with correlation functions $E[X_i(0)X_j(t)] = c_{ij}(t)$, where $i,j \in \{1,2\}$. For all functions $G_i(\cdot,\cdot)$ that fulfill $E[G_i(X_i(0), X_i'(0))] = 0$ and $E[G_i^2(X_i(0), X_i'(0))] < \infty$ and two real constants $\alpha_i$, the following integral is Gaussian and we have

$$\frac{1}{\sqrt{T}}\int_0^T\left(\alpha_1 G_1(X_1(t), X_1'(t)) + \alpha_2 G_2(X_2(t), X_2'(t))\right)dt \xrightarrow[T\to\infty]{d} N\left(0, \sigma_G^2\right),$$

where

$$\begin{aligned} \sigma_G^2 ={}& 2\alpha_1^2\int_0^\infty E\left[G_1(X_1(0), X_1'(0))\, G_1(X_1(t), X_1'(t))\right]dt + 2\alpha_2^2\int_0^\infty E\left[G_2(X_2(0), X_2'(0))\, G_2(X_2(t), X_2'(t))\right]dt \\ &+ 4\alpha_1\alpha_2\int_0^\infty E\left[G_1(X_1(0), X_1'(0))\, G_2(X_2(t), X_2'(t))\right]dt. \end{aligned}$$
(51)

Proof We start by considering the Hermite expansion of the functions $G_i(\cdot,\cdot)$ and write

$$G_i(X_i(t), X_i'(t)) = \lim_{Q\to\infty}\sum_{q=1}^Q\sum_{k_1+k_2=q} c_{k_1 k_2, G_i}\, H_{k_1}(X_i(t))\, H_{k_2}(X_i'(t))$$
(52)
$$= \lim_{Q\to\infty} G_i^Q,$$
(53)

where $G_i^Q$ denotes the sum over $q$ in Eq. (52) truncated at $Q$, and the convergence is in $L^2(\Omega)$. We write

$$\frac{1}{\sqrt{T}}\int_0^T \alpha_1 G_1(X_1(t), X_1'(t)) + \alpha_2 G_2(X_2(t), X_2'(t))\, dt$$
(54)
$$= \lim_{Q\to\infty}\frac{1}{\sqrt{T}}\int_0^T\left(\alpha_1 G_1^Q(X_1(t), X_1'(t)) + \alpha_2 G_2^Q(X_2(t), X_2'(t))\right)dt.$$
(55)

Now, to prove the Gaussianity of Eq. (54) it is sufficient to prove the asymptotic Gaussianity of $\alpha_1 G_1^Q(X_1(t), X_1'(t)) + \alpha_2 G_2^Q(X_2(t), X_2'(t))$. This is sufficient because

$$K_Q(T) = \left\|\frac{1}{\sqrt{T}}\int_0^T\left[\sum_{i=1}^2\alpha_i\left(G_i(X_i(t), X_i'(t)) - G_i^Q(X_i(t), X_i'(t))\right)\right]dt\right\|$$
(56)
$$\le \sum_{i=1}^2|\alpha_i|\left\|\frac{1}{\sqrt{T}}\int_0^T\left[G_i(X_i(t), X_i'(t)) - G_i^Q(X_i(t), X_i'(t))\right]dt\right\|,$$
(57)

where $\|\cdot\|$ is the norm in $L^2(\Omega)$. Each of the terms is bounded,

$$\left\|\frac{1}{\sqrt{T}}\int_0^T\left[G_i(X_i(t), X_i'(t)) - G_i^Q(X_i(t), X_i'(t))\right]dt\right\|^2 \le 2\sum_{q=Q+1}^\infty\int_0^a\left(1-\frac{t}{T}\right)\sum_{k_1+k_2=q} c_{k_1 k_2, G_i}^2\, k_1!\, k_2!\, dt$$
(58)
$$+ 2\sum_{q=Q+1}^\infty\int_a^T\left(1-\frac{t}{T}\right)\psi(t)^Q\sum_{k_1+k_2=q} c_{k_1 k_2, G_i}^2\, k_1!\, k_2!\, dt$$
(59)
$$\le a\sum_{q=Q+1}^\infty\sum_{k_1+k_2=q} c_{k_1 k_2, G_i}^2\, k_1!\, k_2! + \sum_{q=Q+1}^\infty\int_a^T\left(1-\frac{t}{T}\right)\psi(t)^Q\, dt\sum_{k_1+k_2=q} c_{k_1 k_2, G_i}^2\, k_1!\, k_2!,$$
(60)

where $\psi(t) = \max\left(|c_{11}(t)| + |c_{12}(t)|,\ |c_{21}(t)| + |c_{22}(t)|\right)$ and $a$ is a positive real number such that $\psi(t) < 1$ whenever $t > a$. Since Eq. (60) is a vanishing series, $\lim_{Q\to\infty}\lim_{T\to\infty} K_Q(T) = 0$. Now, we address the Gaussianity of $\alpha_1 G_1^Q(X_1(t), X_1'(t)) + \alpha_2 G_2^Q(X_2(t), X_2'(t))$. A result of Kratz and León, Lemma 3, p. 653 in [35], implies that

$$L_i(T) = \frac{1}{\sqrt{T}}\int_0^T G_i^Q(X_i(t), X_i'(t))\, dt$$

can be approached in $L^2(\Omega)$ by the sequence $L_{i,\varepsilon}(T) = \frac{1}{\sqrt{T}}\int_0^T G_i^Q(X_i^\varepsilon(t), (X_i^\varepsilon)'(t))\, dt$ built from $\varepsilon$-dependent processes $X_i^\varepsilon(t)$. The $\frac{1}{\varepsilon}$-dependent processes, for $i = 1,2$, are defined as

$$X_i^\varepsilon(t) = \int e^{it\lambda}\left((f_V * \beta_\varepsilon)(\lambda)\right)^{1/2}\left(\sqrt{1-r}\, dW_i(\lambda) + \sqrt{r}\, dW_c(\lambda)\right),$$
(61)

where $*$ denotes a convolution, $\beta_\varepsilon(t) = \frac{1}{\varepsilon}\beta\left(\frac{t}{\varepsilon}\right)$, $\beta$ being a positive function with $\int|\lambda|^j\beta(\lambda)\,d\lambda < \infty$, $j = 1,2$, and such that its Fourier transform has support in $[-1,1]$. Since $E[X_i^\varepsilon(0)X_i^\varepsilon(\tau)] = c_{ii}(\tau)\hat\beta(\varepsilon\tau)$, these processes are $\frac{1}{\varepsilon}$-dependent. Then

$$\left\|\frac{1}{\sqrt{T}}\int_0^T\sum_{i=1}^2\alpha_i G_i^Q(X_i(t), X_i'(t))\, dt - \sum_{i=1}^2\alpha_i L_{i,\epsilon}(T)\right\| \le \sum_{i=1}^2|\alpha_i|\left\|L_i(T) - L_{i,\epsilon}(T)\right\|,$$
(62)
$$\lim_{\epsilon\to 0}\lim_{T\to\infty}\sum_{i=1}^2|\alpha_i|\left\|L_i(T) - L_{i,\epsilon}(T)\right\| = 0.$$
(63)

This follows from the Central Limit Theorem for $\frac{1}{\varepsilon}$-dependent random vectors and concludes the proof. □

Notes

  1. We provide the MATHEMATICA 8 (Wolfram Research) code to iteratively calculate ν cond (τ). The code can be found at: http://www.tchumatchenko.de/CodeNuCond_Fig2.nb.

References

  1. Blake IF, Lindsey WC: Level-crossing problems for random processes. IEEE Trans Inf Theory 1973, 19:295–315.

  2. Azaïs JM, Wschebor M: Level Sets and Extrema of Random Processes and Fields. Wiley, New York; 2009.

  3. Tchumatchenko T, Malyshev A, Geisel T, Volgushev M, Wolf F: Correlations and synchrony in threshold neuron models. Phys Rev Lett 2010, 104(5): Article ID 058102.

  4. McNeil AJ, Frey R, Embrechts P: Quantitative Risk Management: Concepts, Techniques, and Tools. Princeton Series in Finance. Princeton University Press, Princeton; 2005.

  5. Ehrenfeld S, Goodman NR, Kaplan S, Mehr E, Pierson WJ, Stevens R, Tick LJ: Theoretical and observed results for the zero and ordinate crossing problems of stationary Gaussian noise with application to pressure records of ocean waves. Technical report. New York University, College of Engineering; 1958.

  6. Burak Y, Lewallen S, Sompolinsky H: Stimulus-dependent correlations in threshold-crossing spiking neurons. Neural Comput 2009, 21(8):2269–2308.

  7. Destexhe A, Rudolph M, Pare D: The high-conductance state of neocortical neurons in vivo. Nat Rev Neurosci 2003, 4:739–751.

  8. Embrechts P, McNeil A, Straumann D: Correlation and dependence in risk management: properties and pitfalls. In Risk Management: Value at Risk and Beyond. Cambridge University Press, Cambridge; 2000:176–223.

  9. Dayan P, Abbott LF: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge; 2001.

  10. Tchumatchenko T, Geisel T, Volgushev M, Wolf F: Signatures of synchrony in pairwise count correlations. Front Comput Neurosci 2010. doi:10.3389/neuro.10.001.2010

  11. Rice SO: Mathematical analysis of random noise. In Selected Papers on Noise and Stochastic Processes. Edited by Wax N. Dover, New York; 1954.

  12. Brunel N, van Rossum M: Lapicque's 1907 paper: from frogs to integrate-and-fire. Biol Cybern 2007, 97(5):337–339.

  13. Burkitt A: A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern 2006, 95:1–19.

  14. Burkitt A: A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties. Biol Cybern 2006, 95:97–112.

  15. Fourcaud N, Brunel N: Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput 2002, 14:2057–2110.

  16. Badel L: Firing statistics and correlations in spiking neurons: a level-crossing approach. Phys Rev E 2011, 84: Article ID 041919.

  17. Genz A, Bretz F: Computation of Multivariate Normal and t Probabilities. Springer, Berlin; 2009.

  18. de la Rocha J, Doiron B, Shea-Brown E, Josic K, Reyes A: Correlation between neural spike trains increases with firing rate. Nature 2007, 448(16):802–807.

  19. Shea-Brown E, Josić K, de la Rocha J, Doiron B: Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys Rev Lett 2008, 100(10): Article ID 108102.

  20. Ostojic S, Brunel N, Hakim V: How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci 2009, 29(33):10234–10253.

  21. Binder MD, Powers RK: Relationship between simulated common synaptic input and discharge synchrony in cat spinal motoneurons. J Neurophysiol 2001, 86(5):2266–2275.

  22. Lampl I, Reichova I, Ferster D: Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron 1999, 22:361–374.

  23. Vilela RD, Lindner B: Comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation. Phys Rev E 2009, 80(3): Article ID 031909.

  24. Greenberg DS, Houweling AR, Kerr JND: Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nat Neurosci 2008, 11(7):749–751.

  25. Tetzlaff T, Rotter S, Stark E, Abeles M, Aertsen A, Diesmann M: Dependence of neuronal correlations on filter characteristics and marginal spike train statistics. Neural Comput 2008, 20:2133–2184.

  26. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris K: The asynchronous state in cortical circuits. Science 2010, 327:587–590.

  27. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NM, Tolias AS: Decorrelated neuronal firing in cortical microcircuits. Science 2010, 327:584–587.

  28. Pernice V, Staude B, Cardanobile S, Rotter S: How structure determines correlations in neuronal networks. PLoS Comput Biol 2011, 7(5): Article ID e1002059.

  29. Zohary E, Shadlen MN, Newsome WT: Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 1994, 370:140–143.

  30. Kratz M, León J: Level curves crossings and applications for Gaussian models. Extremes 2010, 13:315–351.

  31. Doukhan P, Jakubowicz J, León JR: Variance estimation with applications. In Dependence in Probability, Analysis and Number Theory. Kendrick Press, Heber City; 2010:203–231.

  32. Feller W: An Introduction to Probability Theory and Its Applications, Vol. 2. 2nd edition. Wiley, New York; 1971.

  33. Hamedani GG, Tata MN: On the determination of the bivariate normal distribution from distributions of linear combinations of the variables. Am Math Mon 1975, 82(9):913–915.

  34. Breuer P, Major P: Central limit theorems for non-linear functionals of Gaussian fields. J Multivar Anal 1983, 13(3):425–441.

  35. Kratz M, León JR: Central limit theorems for level functionals of stationary Gaussian processes and fields. J Theor Probab 2001, 14(3):639–672.


Acknowledgements

The authors thank the two anonymous referees, and Sabrina Münzberg, Amadeus Dettner, and Laurent Badel for constructive comments on a previous version of the paper, Sabrina Münzberg for help with Hermite polynomial calculations and associated problem solving, and Sara Gil Mast for English corrections. EDB and JRL are supported by an ECOS-Nord project under the reference V12M01. TT is funded by the Volkswagen Foundation and the Max Planck Society and is thankful for the support of the Center for Theoretical Neuroscience at Columbia University during her stay there.

Author information


Corresponding author

Correspondence to Tatjana Tchumatchenko.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The main idea of this paper was proposed jointly by EB, JL and TT. All authors contributed equally to the writing of this paper and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Di Bernardino, E., León, J. & Tchumatchenko, T. Cross-Correlations and Joint Gaussianity in Multivariate Level Crossing Models. J. Math. Neurosc. 4, 22 (2014). https://doi.org/10.1186/2190-8567-4-22
