Shifting Spike Times or Adding and Deleting Spikes—How Different Types of Noise Shape Signal Transmission in Neural Populations
The Journal of Mathematical Neuroscience, volume 5, Article number: 1 (2015)
Abstract
We study a population of spiking neurons which are subject to independent noise processes and a strong common time-dependent input. We show that the response of output spikes to independent noise shapes information transmission of such populations even when information transmission properties of single neurons are left unchanged. In particular, we consider two Poisson models in which independent noise either (i) adds and deletes spikes (AD model) or (ii) shifts spike times (STS model). We show that in both models suprathreshold stochastic resonance (SSR) can be observed, where the information transmitted by a neural population is increased by the addition of independent noise. In the AD model, the presence of the SSR effect is robust and independent of the population size or the noise spectral statistics. In the STS model, the information transmission properties of the population are determined by the spectral statistics of the noise, leading to a strongly increased effect of SSR in some regimes, or an absence of SSR in others. Furthermore, we observe a high-pass filtering of information in the STS model that is absent in the AD model. We quantify information transmission by means of the lower bound on the mutual information rate and the spectral coherence function. To this end, we derive the signal–output cross-spectrum, the output power spectrum, and the cross-spectrum of two spike trains for both models analytically.
1 Introduction
Neurons in the sensory periphery encode information about continuous time-dependent signals in sequences of action potentials. Upon repeated presentation of a stimulus, the response of the neuron is not perfectly reproducible but exhibits trial-to-trial variability. Processes leading to such variability are termed noise and can have various origins [1, 2]. How such noise processes affect the transmission of time-dependent signals in neurons can be studied in the framework of information theory [3, 4]. Within this framework, it has been shown, for instance, that the presence of noise can enhance the transmission of weak (subthreshold) signals in single neurons and neural models [5–7], an effect known as stochastic resonance and also observed outside biology [8, 9]. At the level of neural population coding, noise can also have a beneficial role for the transmission of strong (suprathreshold) signals [10, 11] by means of suprathreshold stochastic resonance (SSR), the mechanism of which is quite distinct from that of conventional stochastic resonance despite the similarity in their naming. Additionally, noise not only impacts the total transmitted information, but it also affects which frequencies of the sensory signal are preferably encoded by a neural system. The suppression of information about the input signal in certain frequency bands can be regarded as a form of information filtering [12–16]. Put differently, we may ask whether the neural system is preferentially encoding slow (low-frequency) components of a signal or fast (high-frequency) components of a signal, which can be quantified by the coherence function, as described below.
How noise affects information transmission in neural populations has been studied for a long time [11, 17, 18]. Of particular interest in the context of the information flow through a population are the correlations among neurons that have been observed in many experimental preparations, e.g. in the visual system [19–22], the somatosensory system [23], the olfactory system [24, 25], the barrel cortex of rats [26, 27], and in spinal motor neurons [[28], and references therein]. Such correlations, either in membrane potential, in output spikes, or in spike counts of two cells, can be caused by a common input to both cells due to overlapping receptive fields. For instance, in the electrosensory system [29], the spontaneous activity of different neurons in the absence of the signal is uncorrelated and is driven by independent noise processes. In other systems, the output correlations are not caused by a stimulus. For example, in tangential neurons of the fly visual system, the noise processes themselves are correlated and lead, even in the absence of the sensory signal, to a spontaneous spiking activity that is correlated across different neurons [20] (for a detailed discussion of the noise sources see [30]). Other examples of neurons receiving common noise input are ganglion cells of the primate retina [21] or the projection neurons of the Drosophila olfactory system [25]. In the present study, we consider ensembles of neurons receiving highly correlated noise input as sketched in Fig. 1.
We consider two theoretical models of neural populations that exhibit strong spike train correlations among the neurons within the population, even in the absence of a sensory signal. In this situation, we address the question of how the spike trains of different neurons may be decorrelated by independent noise processes and how this affects the transmission of a sensory signal. More specifically, we are interested in how independent noise influences the spikes of the output spike trains and study two extreme cases. In one case, we assume that independent noise adds and deletes spikes in the output spike trains (AD model) as illustrated in Fig. 2a. This is a likely effect of additional noise in an excitable neuron with low firing rate. In another case, we assume that independent noise shifts the spike times of the output spike trains (STS model) as illustrated in Fig. 2b. This scenario applies to neurons in a tonically firing regime, which generally do not fire with Poisson statistics. We construct the two models in such a way that they cannot be distinguished on a single neuron level. This allows us to ascribe any differences in the information transmission properties of the populations unambiguously to the different effects of the noise.
This work is organized as follows: First, we describe the methods by which we will study the effect of noise on signal transmission in a population of spiking neurons. Second, we introduce two models where independent noise either adds and deletes spikes, or shifts spike times in the output spike trains. In Sect. 4, we then derive the spectral statistics for the two models. These derivations can be skipped upon the first reading. In Sect. 5, we summarize the derived spectral statistics and proceed to study the effect of independent noise on information filtering and the total transmission of information in neural populations. We conclude with a summary and a discussion of our results in Sect. 6.
2 Methods
2.1 Spike Train Statistics & Ensemble Averages
In this paper, we study the transmission of a sensory time-dependent signal by a population of spiking neurons, which is illustrated in Fig. 1. We model the output spike trains of single neurons by stochastic point processes. The output of the μ th stochastic point process can be described by the spike count . This function starts at 0 at and is incremented by 1 at each spike time , i.e. for , for , and so forth. Equivalently, the output of a stochastic point process can be described by the derivative of . This derivative is called the spike train and is given by a sum of delta functions,
We study information transmission properties of the population by quantifying the amount of information about the input signal encoded in the sum
of the individual output spike trains.
We take into account different sources of variability: common noise , independent noise sources , and the stochastic signal (cf. Fig. 1). Consequently, we can consider different ensemble averages, denoted by angular brackets . Subscripts indicate over which processes we average and the absence of subscripts implies averaging over all involved processes. In mathematical terms this notation corresponds to the expectation with respect to the conditional distribution that is indicated by the subscripts, e.g. stands for the expectation of the process with respect to the conditional distribution of ξ, conditioned on a realization of s and η, whereas stands for the total expectation. Note that is still a random process, unless a realization of s and η is fixed. Below, when analyzing correlation functions, e.g. Eq. (7), we will also consider averages over products of spike trains , which in mathematical terms corresponds to
This applies analogously to averages over the processes and .
The instantaneous firing rate
obtained by averaging the spike train only with respect to the common noise , will be an important quantity in our calculations. It still depends on the independent noise and the signal and is difficult to determine in experiments. More accessible is the average over all noise sources by repeated trials with a frozen stimulus and summation over all spike trains. In this way, we obtain (apart from a normalization factor ) the population rate
An example for a signal, spike trains, and the resulting population rate is shown in Fig. 3.
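The estimation of the population rate from summed spike trains can be illustrated with a minimal numerical sketch. All parameter values and the sinusoidal stand-in stimulus below are our own illustrative choices, not taken from the paper; the spike trains are generated with a simple Bernoulli approximation of an inhomogeneous Poisson process:

```python
import numpy as np

rng = np.random.default_rng(1)
r0, eps, dt, T, N = 50.0, 0.2, 1e-3, 20.0, 5   # base rate (Hz), modulation, bin, duration, neurons
t = np.arange(0, T, dt)
s = np.sin(2 * np.pi * 1.0 * t)                # stand-in for the frozen stimulus
rate = r0 * (1 + eps * s)                      # instantaneous firing rate

# Bernoulli approximation of N inhomogeneous Poisson spike trains
spikes = rng.random((N, t.size)) < rate * dt   # one row per neuron
pop_rate = spikes.sum(axis=0) / (N * dt)       # population rate (summed spike trains / N)
```

Averaging `pop_rate` over many trials with the same frozen stimulus recovers the underlying rate modulation, in the spirit of the averaging procedure described above.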
2.2 Information Transmission & Spectral Statistics
In the case of ergodic processes, the total amount of the information about a signal transmitted by the output can be quantified by the mutual information rate R [3], which is measured in bits per second. For Gaussian signals, a lower bound on the mutual information rate [4, 31, 32] is given by
The coherence function between the input signal and the output is calculated from second-order spectral measures of input and output and is defined as
Here and are the summed-spike-train and signal power spectra, respectively, and is the signal–output cross-spectrum. The numerical estimation of spectra follows standard procedures [33]. In our analytical calculations we will use the Wiener–Khinchin theorem [34]
that relates the spectra to the correlation functions in the time domain
The limit of large times in Eq. (7) ensures stationarity. For the summed spike train Eq. (2), the autocorrelation function with in Eq. (7), can be rewritten as
where is the spike train autocorrelation function and the cross-correlation function between two spike trains. Analogously, the signal–output cross-correlation function can be written as
where is the cross-spectrum between the input signal and a single output spike train. Taking the Fourier transformation of Eqs. (8) and (9), using the Wiener–Khinchin theorem Eq. (6), and inserting the results into Eq. (5) yields the coherence function
From Eq. (10) we see that the cross-spectrum of two spike trains, , appears in the denominator of the coherence function and gains significance as N becomes larger. Therefore, an essential theoretical problem is to calculate this cross-spectrum.
As outlined above, the coherence function allows one to estimate the total flow of information through the neural population. However, because enters in a monotonic fashion in Eq. (4), we can also regard the coherence as a frequency-resolved measure of information transfer. Reduction of the coherence in certain frequency bands can be regarded as a form of information filtering, which needs to be distinguished from power filtering. Hence, besides the lower bound , we will also inspect the frequency dependence of the coherence function.
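The spectral pipeline described in this section (segment-averaged spectra, coherence, lower information bound) can be sketched numerically. The rectangular-window segmenting and the normalizations below are generic choices for illustration, not the paper's exact estimation procedure:

```python
import numpy as np

def spectra(x, y, dt, nseg):
    """Segment-averaged power and cross-spectra (rectangular windows)."""
    L = len(x) // nseg
    f = np.fft.rfftfreq(L, dt)
    Sxx = np.zeros(f.size)
    Syy = np.zeros(f.size)
    Sxy = np.zeros(f.size, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(x[k*L:(k+1)*L]) * dt
        Y = np.fft.rfft(y[k*L:(k+1)*L]) * dt
        Sxx += np.abs(X)**2 / (L*dt)
        Syy += np.abs(Y)**2 / (L*dt)
        Sxy += np.conj(X) * Y / (L*dt)
    return f, Sxx/nseg, Syy/nseg, Sxy/nseg

def coherence_and_rate(s, x, dt, nseg=64):
    """Coherence C(f) between input s and output x, and the lower
    bound on the mutual information rate (bits per time unit)."""
    f, Sss, Sxx, Ssx = spectra(s, x, dt, nseg)
    C = np.abs(Ssx)**2 / (Sss * Sxx)
    df = f[1] - f[0]
    R_lb = -np.sum(np.log2(1 - C[1:-1])) * df   # lower bound, excluding DC and Nyquist bins
    return f, C, R_lb
```

For a simple sanity check, an output consisting of the signal plus an equally strong independent white noise should yield a coherence close to 1/2 at all frequencies.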
3 Models
The models that we consider in this paper have the following assumptions in common:

(1) Poisson statistics of spontaneous activity;
(2) high correlations among neurons due to strong common noise input;
(3) encoding of a sensory signal in the time-dependent population rate.
For simplicity, we consider a linear encoding of a weak time-dependent signal. This will allow us to use the lower bound on the mutual information rate as an approximation for the total transmitted information. Note that, although even a single Poisson process can show conventional stochastic resonance [35], with our linear encoding paradigm we exclude this possibility. In our models, the signal transmission in a single neuron is always degraded by noise.
In our theoretical model, we assume that all neurons fire, to zeroth order, in complete synchrony and that a weak noise input, which is independent for every neuron, leads to a decorrelation of the output spike trains. For simplicity, we assume that for each neuron the independent noise process and the sensory signal are additive. Both the sensory signal and the independent noise are modeled by Gaussian processes with unit variance and zero mean.
The considered models can be regarded as inhomogeneous Poisson processes [36], which are rate-modulated by a common signal and an independent noise . Such processes are examples of a doubly stochastic process [37] or a Cox process and are a special case of the inhomogeneous Bernoulli process [38]. The simplicity of the considered models will allow us to characterise the information transfer of weak time-dependent signals analytically. Note that the assumptions (1)–(3) made above describe, in good approximation, spiking in specific sensory systems, e.g. in tangential neurons of the fly visual system [20, 39, 40]. The additional modifications that make up the differences between our two models can be regarded as additional operations on the spike trains in the form of thinning (or the opposite of it) and the introduction of an operational time [37, 41].
Before we introduce in detail the two models sketched in Fig. 2, it is worth noting that, for weak stimuli and weak independent noise, these models possess the same signal–output cross-spectrum , the same power spectrum , and the same time-dependent output firing rate. Therefore, for the coherence function and the information rate are identical for both models. The models are mainly distinguished by how independent noise affects the spikes of the output spike trains, which results in different cross-spectra of two spike trains. This setup allows us to study how the response of spikes to noise affects information transmission in neural populations, while keeping all other potential influences on signal transmission unchanged.
3.1 Addition and Deletion Model (AD Model)
In the following, we introduce the model for a population of spiking neurons where independent noise adds or deletes spikes. First, we discretize the time axis into bins of width Δt. We generate a spike in the j th time bin in the μ th spike train, whenever the following condition is fulfilled:
The common noise process ξ is uniformly distributed in and uncorrelated in time. The spikes are assigned the height such that the discrete spike train reads
where is the Heaviside function (implementing the indicator function) and the second argument of indicates the time-discretized version of the spike train. Here is the midpoint of the time bin where the k th spike of the μ th spike train was generated. In the limit the spike train approximates the sum of δ-functions given by Eq. (1).
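A discrete-time sketch of this generation rule is given below. Since the displayed condition Eq. (11) is not reproduced here, the specific threshold form `dt * r0 * (1 + eps*s + sig*eta)` is our assumption of a rate modulated linearly by signal and independent noise; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
r0, eps, sig = 50.0, 0.1, 0.1                  # base rate, signal and noise strengths (assumed)
dt, T, N = 1e-4, 10.0, 4
nbins = int(T / dt)
xi  = rng.random(nbins)                        # common noise, uniform in [0, 1), white in time
s   = rng.standard_normal(nbins)               # stand-in for the band-limited signal
eta = rng.standard_normal((N, nbins))          # independent noise, one row per neuron

# spike in bin j of train mu whenever xi_j falls below the modulated threshold
thresh = dt * r0 * (1 + eps * s + sig * eta)   # assumed form of the spiking condition
spikes = xi[None, :] < thresh                  # common xi synchronizes the trains;
                                               # eta adds (raises) or deletes (lowers) spikes
```

With `sig = 0` all trains are identical; weak independent noise then adds or deletes individual spikes relative to the common pattern, as sketched in Fig. 2a.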
We can compute the ensemble average of the spike train over the common noise ξ
The average is conditioned on specific realizations of the processes s and . As we show explicitly in Appendix A in Eq. (57), averaging additionally over the independent noise and the signal, one finds in the limit of
Throughout the paper, we will consider the limit , such that we can neglect correction terms like the one in the above equation.
In the left column of Fig. 3, we show how a sensory signal is encoded in the population firing rate of a population of five AD neurons and how the output spike trains of the neurons are modulated by independent noise.
3.2 Spike-Time-Shifting Model (STS Model)
Next, we introduce the model for a population of spiking neurons where independent noise shifts the spike times of the output spike trains. To zeroth order, the N neurons of the population generate identical spike trains
which we model by a homogeneous Poisson process with mean firing rate and spike times . For the μ th neuron, the times are transformed into new spike times via the transformation
with defined in Eq. (11). For a given spike time , we integrate the right-hand side of Eq. (16), until the integral attains the value [36]. The resulting integration boundary is then the k th spike time of the μ th spike train . In general, due to the different independent noise processes , the output spike trains will be different for each neuron. Hereby, each spike train is an inhomogeneous Poisson spike train with a time-dependent firing rate. The procedure described in this section is equivalent to the simulation of a perfect integrate-and-fire neuron with exponentially distributed thresholds [36]. The time t obtained after the transformation of the time axis h in Eq. (16) is also known as operational time [37, 41].
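The time-rescaling construction can be sketched numerically for a single neuron. The modulated rate `r0 * (1 + eps*s + sig*eta)` is our assumed stand-in for the integrand in Eq. (16), which is not reproduced here, and the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
r0, eps, sig, dt, T = 50.0, 0.1, 0.1, 1e-4, 10.0
t = np.arange(0, T, dt)
s   = np.sin(2 * np.pi * 1.0 * t)              # stand-in signal
eta = rng.standard_normal(t.size)              # independent noise of one neuron

# homogeneous Poisson "zeroth-order" spike times with rate r0
t_star = np.cumsum(rng.exponential(1 / r0, size=int(3 * r0 * T)))
t_star = t_star[t_star < T]

# operational time: H(t) integrates the assumed modulated rate; the k-th new
# spike time solves H(t_k) = r0 * t_star_k, so the spike order is preserved
# because H is increasing as long as the rate stays positive
H = np.cumsum(r0 * (1 + eps * s + sig * eta)) * dt
t_new = np.interp(r0 * t_star, H, t)           # invert H numerically
```

Note that `np.interp` clamps targets beyond `H[-1]` to the end of the simulation window; for a production simulation one would handle that edge explicitly.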
Although we do not model the underlying noise process explicitly, we think of the homogeneous spike trains in Eq. (15) as a result of a common noise process ξ, analogously to the AD model. By the average , we will denote the average over different realizations of the homogeneous Poisson spike trains in Eq. (15).
For a homogeneous Poisson spike train that is transformed according to Eq. (16) with , the average over the spike train for a fixed realization of the signal and the independent noise reads [36]. For a process that is not bounded below by zero this is not strictly fulfilled. Hence, ensemble averages over the spike train will contain correction terms that are proportional to the square root of the probability that is smaller than zero, which we calculated in Appendix A in Eq. (52). Consequently, using Eq. (11), we obtain for the averaged spike train
which in the limit leads to the same mean firing rate as for the AD model Eq. (14) in the limit of .
A simulation of five spike trains of the STS population, driven by a common noise process ξ, a common signal s, and independent noise processes , is shown in Fig. 3e. Note that the modulation in Eq. (16) is very distinct from adding jitter to the single spike times, as is considered in [42–44], in that the modulation of the spike times presented here preserves the order of the spikes in each spike train. Other models that incorporate the deletion of spikes in a Poisson spike train [45] or a combination of deletion and shifting as in the thinning and shifting model [42, 44], differ from the models presented here in that the single spike trains of those models are homogeneous spike trains with constant rates. However, the models in the present paper are designed such that the single spike trains have a prescribed time-dependent firing rate , which still depends on the realization of the signal s and the individual noise η. The cross-correlations between spike trains are a consequence of the different implementations of the time-dependent firing rate and are not prescribed a priori as in [42, 44, 45]. Even if the deletion or shifting of spikes in the thinning and shifting model is performed on a rate-modulated mother process, the resulting process would not be equivalent to the AD model or STS model, in which the addition and deletion of spikes and the shifting of spike times are not independent of the signal realization. In particular, the thinning and shifting model of a population of daughter processes for which the stimulus is solely encoded in the firing rate of the mother process cannot exhibit suprathreshold stochastic resonance.
3.3 Modeling the Common Signal and the Independent Noise Processes
The sensory signal s and the independent noise sources are modeled by Gaussian stochastic processes with zero mean and unit variance. For simplicity, we choose, for both the signal and the independent noise, a flat power spectrum,
where and are lower and upper cutoff frequencies, respectively. Throughout the paper, we will consider a finite upper cutoff frequency and a nonvanishing lower cutoff frequency. As we will show in our analytical calculation below, the cross-spectrum for two spike trains of the STS model is finite only for . A realization of the common signal s is shown in Figs. 3a and 3d.
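Such a band-limited, flat-spectrum Gaussian process can be generated in the Fourier domain; below is a generic sketch (the cutoff frequencies and duration are example values, not the paper's parameters):

```python
import numpy as np

def bandlimited_gaussian(T, dt, f_low, f_high, rng):
    """Zero-mean, unit-variance Gaussian process with a flat
    power spectrum restricted to the band [f_low, f_high]."""
    n = int(T / dt)
    f = np.fft.rfftfreq(n, dt)
    band = (f >= f_low) & (f <= f_high)
    # independent Gaussian real/imaginary parts inside the band, zero outside
    amp = np.zeros(f.size, dtype=complex)
    amp[band] = rng.standard_normal(band.sum()) + 1j * rng.standard_normal(band.sum())
    x = np.fft.irfft(amp, n)
    return (x - x.mean()) / x.std()            # enforce zero mean, unit variance

rng = np.random.default_rng(5)
s = bandlimited_gaussian(T=10.0, dt=1e-3, f_low=1.0, f_high=100.0, rng=rng)
```

Because the amplitudes are set to zero outside the band, the realization carries power only between the two cutoff frequencies, matching the flat-spectrum assumption above.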
3.4 Simulations
In contrast to the AD model, the numerical measurement of the statistics of the STS model requires a careful choice of simulation parameters. Depending on the shape of the cross-spectrum between different spike trains for the STS model, one has to choose a large simulation time to ensure stationarity and a very small time discretization to be able to resolve correlations between spike trains on small time scales. Furthermore, the coherence function systematically depends on the number of realizations used for the numerical averaging of the spectral statistics. The values of the time discretization Δt, the total simulation time T, and the number of realizations used for the numerical averaging of the spectral statistics are reported in Table 1.
4 Derivation of Spectral Measures
4.1 Input–Output Cross-Spectrum
In this section, we calculate the spectral measures that are necessary to quantify information transmission properties of the populations. We start by considering the input–output cross-correlation function
The correction terms can be derived in complete analogy to the calculation in Appendix A. The first term in the above equation can be calculated using Eq. (11) and the fact that s and are Gaussian processes with unit variance and zero mean, which leads to
In the limit , keeping only the first-order term in , the correction term in the above equation can be neglected. Then, after a Fourier transformation, we find the input–output cross-spectrum
which is equal for both models.
4.2 Cross-Spectrum for Two Spike Trains for the AD Model
The cross-correlation function between two spike trains is defined as
where and are different spike trains of a population with . The ensemble averages in the above equation are taken over four stochastic processes: The common noise ξ, the common signal s, and the independent noise processes and . Employing Eq. (14), we can write the second term in Eq. (20) as
The first term in Eq. (20) can be interpreted as a probability density [4]. Choosing a discrete variant of the spike train as introduced in Eq. (12), this leads to
(Pr stands for probability and ST stands for spike train). As we generate the spike trains in discrete time steps, we first consider the cross-correlation function between two spike trains with a finite time discretization with . Splitting the expression in the above equation into two parts, one for and one for , we obtain
with
and
Note that, due to stationarity of the stochastic signals and spike trains, the probabilities in Eq. (21) do not depend on t. As described in Sect. 3.1, the values of realizations of the process ξ at different times are independent of each other, which allows us to average both spike trains separately leading to
Using Eq. (11) in the above equation and employing that and are independent Gaussian processes with zero mean we obtain in the limit
From the definition of the AD model in Eq. (11), we can infer that the probability of observing a synchronous spike in two spike trains equals the probability that the thresholds and are both higher than the realization of the common noise variable . Then, dropping the time arguments, the probability of synchronous spiking can be expressed as an average over two theta functions
As is shown in Eq. (59) in Appendix B, we can write the above expression for weak sensory signals as
Inserting Eqs. (22) and (24) in Eq. (21), and taking the limit , we obtain
and for the cross-correlation function between two spike trains Eq. (20) we find
In the limit , keeping terms up to second-order in and , we can neglect the correction terms in the above equation and find the following cross-spectrum between two spike trains:
The above equation shows that by adding and deleting spikes the weak independent noise sources lead to a decorrelation of the two spike trains with a uniform decrease of power at all frequencies proportional to . The analytical result for the cross-spectrum of two spike trains Eq. (25) for the AD model is compared with simulations in Fig. 4. Note that because the cross-correlation function between two spike trains is symmetric with respect to τ, the cross-spectrum is real-valued for all frequencies.
4.3 Cross-Spectrum for Two Spike Trains for the STS Model
In this section, we calculate the cross-spectrum between spike trains μ and ν for the STS model. We first consider the autocorrelation function
of a homogeneous Poisson process with constant rate , where we use a slightly different notation than in Eq. (7). The last term in the above equation equals . The spike trains inside the first average can be expressed as derivatives of the spike count as in Eq. (1), such that
The power spectrum of a homogeneous Poisson process is constant and implies for the autocorrelation function of a homogeneous Poisson process
Combining Eq. (27) with Eq. (26), we obtain
Now we calculate the cross-correlation function between two spike trains
of the full process, subject to an intrinsic noise ξ, independent noise processes and , and an input signal s. Employing Eq. (17), the last term in the above equation can be written as
The first term of the cross-correlation function can be recast as before into
The rate-modulated Poisson process generated by the STS model is related to a homogeneous Poisson process with constant rate by the time transformation Eq. (16). We use this relation to link the inhomogeneous to the homogeneous spike count via
Using the above relation and Eq. (28), we find
Note that the above relation is valid only if is strictly larger than zero. Hence, we obtain for Eq. (31)
where the correction term is proportional to the square root of the probability that computed in Eq. (52). Using , employing the relation Eq. (62) derived in Appendix C, and substituting the variables and , we transform Eq. (33) into
Using the definition of Eq. (32), we can write the average over the delta function in Eq. (34) as
The new stochastic variable g is a sum of two integrals over Gaussian variables and therefore also a Gaussian variable. The average of the delta function over realizations of g is then the probability that g attains the value , and is given by
where is the variance of . In Appendix D, Eq. (64), we show that
for our specific choice of a flat noise power spectrum, introduced in Sect. 3.3. Employing Eqs. (35), (34), and (30) in Eq. (29) and expanding up to second-order in , we obtain for the cross-correlation function for two spike trains
We note that the linear term in vanishes due to the zero mean of the Gaussian signal . Equivalently, all higher-order odd terms in in Eq. (37) vanish due to the Gaussian nature of the signal (except for the correction term due to realizations of signal and individual noise that lead to ). From Eqs. (36) and (37) it can be seen that for a vanishing lower cutoff frequency of the independent noise spectrum (), the variance diverges and as a consequence of this the cross-correlation between the two spike trains vanishes—only the part that is due to the signal (second term in Eq. (37)) still contributes.
After Fourier transforming Eq. (37) (neglecting the correction terms), we find the cross-spectrum for two spike trains in the STS population,
In Fig. 4, the analytical result for the cross-spectrum for two spike trains of the STS model Eq. (38) is compared with simulations. As for the AD model the cross-spectrum of two spike trains is real-valued. In contrast to the AD model Eq. (25), the cross-spectrum of two spike trains for the STS model Eq. (38) exhibits a strong decrease at high frequencies, while it approaches the spike train power spectrum Eq. (39) at low frequencies. Note that, although we derived only up to second-order in , the theory fits the simulation results very well even for .
4.4 Single Spike Train Power Spectrum
In Appendix E in Eqs. (67) and (72), we derive the spike train power spectrum which in the limit of and (keeping terms up to second-order in and ) is equal for the AD and STS models
For and , the power spectrum is flat, as we would expect for homogeneous Poisson spike trains.
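This flat Poisson baseline is easy to confirm numerically; the sketch below uses generic parameters, a Bernoulli discretization of a homogeneous Poisson train, and a trial-averaged periodogram:

```python
import numpy as np

rng = np.random.default_rng(4)
r0, dt, T, ntrials = 100.0, 1e-4, 2.0, 200     # rate (Hz), bin, duration, trials
n = int(T / dt)
f = np.fft.rfftfreq(n, dt)
S = np.zeros(f.size)
for _ in range(ntrials):
    x = (rng.random(n) < r0 * dt) / dt          # spike train as binned delta functions
    X = np.fft.rfft(x - x.mean()) * dt          # subtract the mean to remove the DC peak
    S += np.abs(X) ** 2 / T                     # periodogram of this trial
S /= ntrials                                    # trial average: flat at ~ r0 for f > 0
```

Up to a small Bernoulli-discretization correction of order r0·Δt, the averaged spectrum sits at the height r0 at all nonzero frequencies, as expected for a homogeneous Poisson spike train.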
5 Information Transmission in Neural Populations
Here, we use the spectral measures derived in the previous section to study information transmission in two neural populations. The populations are constructed in such a way that they both encode the sensory signal in the time-dependent population firing rate, and both exhibit identical single-spike-train power spectra and identical signal–output cross-spectra. The main difference between the populations lies in the effect that independent noise has on the spikes of the output. In one population independent noise adds and deletes spikes (AD model), while in the other independent noise leads to spike-time shifting (STS model). We quantify the total transmitted information about the sensory signal via the lower bound on the mutual information rate Eq. (4),
and study information filtering by means of the coherence function Eq. (10),
The input–output cross-spectrum Eq. (19) and the single spike train power spectrum Eq. (39) read
while the different cross-spectra between two spike trains for the two different models are given by Eqs. (25) and (38):
In all expressions above, we considered the limits and . If the sensory signal is weak compared to the noise processes driving the neurons, as is assumed throughout this paper, the coherence is much smaller than one. This allows us to employ an approximation for the lower bound on the mutual information rate,
in the analytical calculations to obtain simpler expressions. In the subsequent sections, we will study information transmission in populations of AD neurons and STS neurons.
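The linearization behind this approximation can be written out explicitly. Since the displayed equations are not reproduced here, the following uses generic symbols for the standard Gaussian-channel lower bound and its small-coherence expansion:

```latex
R_{\mathrm{lb}} \;=\; -\int_{0}^{\infty} \mathrm{d}f \, \log_2\!\bigl(1 - C(f)\bigr)
\;\approx\; \frac{1}{\ln 2} \int_{0}^{\infty} \mathrm{d}f \, C(f),
\qquad C(f) \ll 1 ,
```

i.e. for a weak signal the information rate is, to leading order, simply proportional to the integrated coherence.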
5.1 AD Population
Inserting the single spike train power spectrum Eq. (39), the input–output cross-spectrum Eq. (19), and the cross-spectrum for two spike trains Eq. (25) into Eq. (10), we find for the coherence function of the AD population
Here, we used that signal and noise have equal power spectra , as described in Sect. 3.3. The coherence function for the AD model is plotted and compared with numerical simulations in Fig. 5. The only dependence of the coherence function Eq. (41) on frequency comes from the signal power spectrum . Therefore, for a flat signal power spectrum the coherence function of the AD model is also flat for frequencies . Consequently, a population of AD neurons can be referred to as a broadband filter of information, because the sum of the output spike trains contains equal amounts of information about different frequency bands of the signal.
Inserting the coherence Eq. (41) into Eq. (40) and employing Eq. (18), we obtain for the lower bound on the mutual information rate of the AD population
The approximate expression Eq. (42) is compared with simulations for two sets of parameters in Fig. 6.
For , the last term in the denominator of Eq. (42) vanishes and the lower bound of the mutual information rate can be simplified as
From the above equation, it becomes evident that in a single neuron an increase of the independent noise level can only decrease the lower bound on the mutual information rate. For , additional independent noise () has a positive effect on information transmission and SSR is observed. The denominator of Eq. (42) is a quadratic function in and exhibits a minimum at a finite level of independent noise, resulting in a maximum of the lower bound on the mutual information rate. To study the behavior of for weak independent noise, we expand Eq. (42) with respect to and obtain
with
The linear term in Eq. (44) is always positive. Hence, the population of AD neurons always profits from weak independent noise regardless of the specific choice of model parameters.
5.2 STS Population
Inserting the single spike train power spectrum Eq. (39), the input–output cross-spectrum Eq. (19), and the cross-spectrum for two spike trains Eq. (38) into Eq. (10), we find for the coherence function of the STS population
where the two frequency-dependent functions are defined in Eq. (38). As for the AD model discussed above, we used that signal and noise have equal power spectra. Due to the frequency dependence of the cross-spectrum, the coherence function also depends strongly on frequency and exhibits a monotonic increase, as shown in Fig. 5. Thus, the population of STS neurons can be regarded as a high-pass filter of information, similar to that observed for heterogeneous short-term plasticity [16] or coding by synchrony [13, 15].
In order to understand the high-pass filtering of information in the coherence function as well as the stochastic resonance effect discussed below, we note that the cross-correlations between different spike trains contribute largely to the output variability of the sum, in particular in the absence of intrinsic noise. This output variability is quantified by the output power spectrum and appears in the denominator of the coherence function. With individual intrinsic noise, spike times of different neurons are slightly shifted, drastically reducing cross-correlations at high frequencies and thus the amount of signal-unrelated variability in these frequency bands. Therefore, the coherence function increases with frequency.
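This mechanism can be illustrated numerically. In the sketch below (a toy setup with assumed rate, jitter width, and estimator parameters, not values from the paper), two spike trains share identical spike times that are then jittered independently; the magnitude of their cross-spectrum survives only at low frequencies:

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(1)
fs, T, rate = 1000.0, 200.0, 20.0                   # sample rate (Hz), duration (s), spikes/s
n = int(T * fs)
times = np.sort(rng.uniform(0, T, int(rate * T)))   # common spike times of the pair

def binned(ts):
    """Bin spike times into a count array at resolution 1/fs."""
    x = np.zeros(n)
    idx = np.clip((ts * fs).astype(int), 0, n - 1)
    np.add.at(x, idx, 1.0)
    return x

sigma = 0.01                                        # 10 ms independent jitter (assumed)
a = binned(times + sigma * rng.standard_normal(times.size))
b = binned(times + sigma * rng.standard_normal(times.size))

f, S_ab = csd(a, b, fs=fs, nperseg=4096)
mag = np.abs(S_ab)
low = mag[(f > 1) & (f < 10)].mean()                # cross-correlations preserved here
high = mag[(f > 200) & (f < 400)].mean()            # cross-correlations destroyed by jitter
print(low / high)                                   # clearly larger than one
```

At 300 Hz a 10 ms Gaussian jitter corresponds to many signal periods, so the coherent part of the cross-spectrum is suppressed essentially to the estimator's noise floor.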
Inserting Eq. (46) into Eq. (40) and using the noise and signal power spectra of Eq. (18), we find for the lower bound on the mutual information rate of the STS population
The lower bound on the mutual information rate for the STS population is compared with simulations for two sets of parameters in Fig. 6. We observe that for the given parameters the STS model shows a large SSR effect, while the AD model profits only weakly from additional noise.
For N=1, the frequency-dependent term in the integrand of Eq. (47) vanishes and the lower bound on the mutual information rate reduces to
which is equal to Eq. (43) for the AD model. We compare the lower bound on the mutual information rate for N=1 for the two models numerically in Fig. 7 for different signal strengths and independent noise levels. For sufficiently low levels of independent noise, there is no difference in the amount of transmitted information between the AD and the STS model at the level of a single neuron. By construction, it is impossible to distinguish between the two models from the observation of one single spike train.
For N>1, additional noise can have a positive effect on information transmission, as illustrated in Fig. 6. Increasing the independent noise level leads to a decrease of the cross-spectral term in the denominator of the integrand in Eq. (47), as already discussed at the beginning of this section in the context of the high-pass coherence function. However, an increase of the independent noise also increases the second term in the denominator of Eq. (47), which is proportional to the intrinsic noise intensity. Therefore, whether SSR is observed depends on the specific parameter values chosen. As for the AD model, we expand the lower bound on the mutual information rate Eq. (47) with respect to the noise level and obtain
with the coefficient defined in Eq. (45). The above expansion illustrates that, when the independent noise vanishes, the lower bound on the mutual information rate is identical for the two models for arbitrary N. The second-order term in Eq. (48) can attain both negative and positive values depending on the choice of the model parameters. The condition that the second-order term becomes negative, and hence that the lower bound on the mutual information rate at vanishing noise is a decreasing function of the noise level, reads
If the above condition is fulfilled, weak individual noise does not improve the transmission of the sensory signal and no SSR is observed. Two examples are shown in Fig. 8. In contrast to the AD model, where SSR is always observed for N>1, the occurrence of SSR in the STS model depends on the specific choice of the model parameters.
Using Eq. (44) and Eq. (48), we can, for a given set of parameters, find a noise strength
for which the lower bound on the mutual information rate is equal for both models. From the above equation we can see that whether the STS population or the AD population transmits more information for a given value of independent noise is mainly determined by the noise and signal cutoff frequencies.
Finally, let us illustrate in Fig. 9 the stochastic resonance effect when it is most pronounced, namely in the STS model for a large number of neurons and a high cutoff frequency (except for N, all parameters as in Fig. 6d). In this situation, we consider the low-pass filtered summed output of the population for different levels of the intrinsic noise. Without intrinsic noise (Fig. 9a), the output, i.e. the sum of N perfectly synchronized spike trains, does not resemble the input signal very much. It is important to note that according to Eq. (19) and Eq. (9) an average over many such runs would yield a time series that tracks the input signal closely. However, single runs (red, black, green) in the absence of intrinsic noise are highly unreliable. The right amount of intrinsic noise (used in Fig. 9b) desynchronizes the N spike trains, reduces cross-correlations at high frequencies, and thus reduces output variability due to the common noise. Consequently, different realizations of the process for a frozen input signal look more similar and track the input signal reliably (cf. Fig. 9b). However, if we increase the intrinsic noise to much higher levels, as in Fig. 9c, this noise itself starts to contribute significantly to the output variability and the reliability of signal transmission is diminished again.
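A toy version of this scenario can be scripted as follows. All parameter values, the moving-average readout, and the sinusoidal stand-in for the frozen signal are our own assumptions, not taken from the paper: a common Poisson spike train is copied into N neurons, each copy is jittered independently, and the low-pass filtered population sum is correlated with the input signal for no, moderate, and excessive intrinsic noise:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, T, N = 1000.0, 50.0, 500               # sample rate (Hz), duration (s), population size
n = int(T * fs)
t = np.arange(n) / fs

s = np.sin(2 * np.pi * 1.0 * t)            # slow frozen input signal (1 Hz)
r0, eps = 50.0, 0.5
rate = r0 * (1 + eps * s)                  # signal-modulated firing rate

common = rng.random(n) < rate / fs         # one Poisson realization shared by all neurons
times = t[common]

def filtered_sum(sigma):
    """Sum of N independently jittered copies of the common spike train,
    low-pass filtered with a 50 ms moving average."""
    out = np.zeros(n)
    for _ in range(N):
        shifted = times + sigma * rng.standard_normal(times.size)
        idx = np.clip((shifted * fs).astype(int), 0, n - 1)
        np.add.at(out, idx, 1.0)
    win = int(0.05 * fs)
    return np.convolve(out, np.ones(win) / win, mode="same")

for sigma in (0.0, 0.05, 2.0):             # no, moderate, and excessive jitter (seconds)
    c = np.corrcoef(s, filtered_sum(sigma))[0, 1]
    print(f"jitter {sigma:5.2f} s: correlation with signal {c:.2f}")
```

With these assumed parameters, the moderate jitter level typically yields the highest correlation with the signal, mirroring the qualitative behavior of Fig. 9b.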
6 Summary and Conclusions
In this paper, we investigated how the way in which noise acts on output spikes shapes the information transmission properties of Poisson neurons. In particular, we considered two populations with strong common input, where in one case weak independent noise added and deleted spikes, while in the other it shifted spike times. In the limit of a weak sensory signal, we analytically derived the spectral statistics of both models and studied information filtering and the emergence of suprathreshold stochastic resonance (SSR). We showed that, even when single neurons of the AD model and the STS model cannot be distinguished by their response statistics, the different effects of independent noise on spikes lead to qualitative and quantitative differences in information transmission at the population level.
In the AD model, the presence of the SSR effect is robust—whenever we consider a population with N>1, a small amount of intrinsic noise has a beneficial effect on signal transmission. In the STS model, the information transmission properties of the population are determined by the cutoff frequencies of the noise. Depending on the specific parameters, one finds a pronounced SSR effect in some regimes (exceeding the effect in the AD model by far) or no SSR effect in others. Furthermore, we observe a high-pass filtering of information in the STS model that is absent in the AD model.
There are a number of studies that explored theoretically the case of weakly correlated neurons and employed perturbation methods to relate output spike train correlations to input correlations [46–52]. In this paper, we have considered the opposite limit of strongly correlated spike trains that are only weakly decorrelated due to intrinsic noise sources. In this limit, we were not only able to derive comparatively simple expressions for the cross-correlation between two spike trains but were also able to explore analytically the consequences of these correlations for the transmission of time-dependent signals.
The question arises how the specific choice of the output, which is taken to be the sum of individual spike trains, affects the findings discussed above. The most general approach would be to study the multivariate mutual information between the input signal and the population of output spike trains. This quantity is hard to compute both numerically and analytically, and its exact calculation is beyond the scope of this study. However, the mutual information between the input signal and the sum of outputs is a lower bound for the full multivariate mutual information, because the summation can only degrade the information content contained in the entire set of the output spike trains. Additionally, for vanishing individual noise all output spike trains are identical and the information content of the population does not differ from that of the sum of identical spike trains. Therefore, if the mutual information between the input signal and the summed output increases with individual noise, i.e. exhibits suprathreshold stochastic resonance, the full multivariate mutual information increases as well.
The mutual information between the input signal and the summed output has been estimated here by its lower bound. In our setting with a weak signal that is encoded in the firing rate of the Poisson process, we expect this bound to be rather tight. In fact, for a single inhomogeneous Poisson process, the mutual information and its lower bound coincide in leading order of the signal amplitude [53].
In this study, we inspected two simple and abstract models for the effect of weak noise on neural spikes and its consequences for signal transmission by neural populations. We would like to emphasize that the pure limits of an AD model or an STS model approximate the behavior of biophysical neuron models. On the one hand, it is plausible that in an excitable neuron model, in which the crossing of a threshold may be aided or prevented by a weak driving, addition and deletion of spikes as in our AD model can be observed. Stochastic oscillators, on the other hand, display a shifting of spike times due to a weak driving, as described by the phase response curve [54]. In between these limits, we expect a combination of both addition and deletion and shifting of spikes. Indeed, such a combination has been observed experimentally [55]. Hence, a generalization of our framework to a Poisson process that includes both effects and allows one to tune gradually between the pure AD and STS models inspected in this paper would certainly be worth additional effort in a future study.
Appendix A: Mean Firing Rate of the AD and STS Model
For the AD model the average of the spike train over the intrinsic noise is given by Eq. (13) as
The average of the spike train over all stochastic processes can now be written as
With from Eq. (11) the first term in the above equation gives
as s and η are Gaussian processes with unit variance and zero mean. For the other terms in Eq. (49), we will show that they are of higher order in Δt and in the small parameters of the model. For the second term in Eq. (49), we can find an upper bound using the Cauchy–Schwarz inequality
The average in the last line of the above equation is the probability that the rate falls below zero and is given by
which gives for Eq. (51)
For the third term in Eq. (49) we can find an upper bound using again the Cauchy–Schwarz inequality
where in the last line we have dropped the mixed term, which is always negative. Furthermore, note that the average in the last line of Eq. (54) is the probability that the corresponding argument exceeds its bound and is given by
which gives for Eq. (54)
Inserting Eqs. (50), (53), and (56) into Eq. (49), we obtain the mean firing rate for the AD model
In the limit of small Δt, the last term in the above equation can be dropped and we obtain
A similar estimation leads to the same formula for the STS model.
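The bin-wise construction that underlies this estimate, a spike in each bin of width Δt with probability r(t)Δt, can be sketched in a few lines (the form of the rate modulation and all parameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, r0 = 1e-3, 1000.0, 30.0          # bin width (s), duration (s), base rate (Hz)
n = int(T / dt)

s = rng.standard_normal(n)              # stand-in for the weak common signal
rate = np.clip(r0 * (1 + 0.1 * s), 0.0, None)

# Bernoulli approximation of the inhomogeneous Poisson process:
# a spike in bin k occurs with probability rate[k] * dt
spikes = rng.random(n) < rate * dt
print(spikes.sum() / T)                 # empirical firing rate, close to r0
```

Since the modulation has zero mean, the empirical rate converges to the base rate r0 up to the higher-order corrections discussed above.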
Appendix B: Probability for Synchronous Spikes in the AD Model
For the AD model, according to Eq. (23), the probability to observe two spikes in the same time window Δt in two different spike trains is given by
where the distribution of ξ is a Gaussian with unit variance and zero mean. Splitting the integration interval of the last integral in the above equation into two parts, such that the argument of the theta function is positive on one part and negative on the other, we obtain
which after a change of the order of integration can be transformed into
The average of the theta function over ξ reads
Using the above equation, Eq. (58) can now be written as
The order of the second and third terms in the above equation can be estimated analogously to the calculation in Appendix A, which leads to
Appendix C: Simplification of the Cross-correlation Function for the STS Model
In this section, we simplify Eq. (33) of Sect. 4.3. To this end, we first consider
where the average is conditioned on fixed realizations of the signal and of the independent noise process. The variable appearing here is defined in Eq. (32). We first express the delta function in the above expression as the derivative of a Heaviside function, which leads to
As we will show in Eq. (63), the variance becomes constant in the limit of large times. Performing the derivative in Eq. (61), we find
Combining the above equation with Eq. (60), we finally obtain the relation
Using the above result, we rewrite the average over the delta function in Eq. (33) of Sect. 4.3 as
Expressing the delta function in the above equation as the derivative of a Heaviside function, as in Eq. (61), and following the steps of the above calculation, we find
which leads to the result
used in the calculation of the cross-correlation function in Eq. (34).
Appendix D: Variance of the Integrated Independent Noise
To calculate the variance from Eq. (35), we first consider
for a Gaussian signal with zero mean and unit variance. Performing the average in the above equation and changing the integration variables, we can write
Next, we express the autocorrelation function in the above equation by its Fourier transform according to Eq. (6) and find
where in the last line of the above equation we integrated over τ. Since the power spectrum is the Fourier transform of a real function, it is symmetric with respect to f, which leads to
For a band-pass limited white noise with the power spectrum
we obtain in the limit of large times
and for the variance in Eq. (35) for large t we find
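Band-pass limited Gaussian noise of this type can be generated directly in the Fourier domain. The sketch below (sample rate, cutoff, and normalization are chosen for illustration only) draws complex Gaussian Fourier coefficients, zeroes them above the cutoff, and verifies that essentially no power remains beyond the cutoff:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n, fc = 1000.0, 2**16, 100.0                    # sample rate (Hz), samples, cutoff (Hz)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# flat spectrum up to fc: i.i.d. complex Gaussian coefficients, zero above fc
coeff = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
coeff[0] = 0.0                                      # remove the mean
coeff[freqs > fc] = 0.0
eta = np.fft.irfft(coeff, n)
eta /= eta.std()                                    # normalize to unit variance

spec = np.abs(np.fft.rfft(eta)) ** 2
frac = spec[freqs > fc + 1.0].sum() / spec.sum()    # relative power above the cutoff
print(frac)                                         # numerically zero
```

Because the zeroed coefficients are reproduced exactly by the rfft/irfft round trip, the power above the cutoff is limited only by floating-point roundoff.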
Appendix E: Single Spike Train Power Spectrum
In this section, we calculate the single spike train power spectra for the AD and the STS model. To this end, we consider the autocorrelation function of the spike train
where we assume that the spike trains are stationary. Since a single spike train is considered and the average is taken over one independent noise process η, we will drop the subscript employed previously.
We first consider the AD model. As for the timediscrete crosscorrelation function between two spike trains in Sect. 4.2, we can express Eq. (65) in terms of probability densities
where the first factor is the probability to find a spike in a given time bin of width Δt, and the second is the probability to find two spikes separated by the time interval τ. Analogously to Eq. (23) we find
where in the first line of the above equation we dropped the subscript and the time argument of the parameter r, which is defined in Eq. (11). For the probability of asynchronous spikes we find
which in the limit of leads for the autocorrelation function Eq. (66) to
Taking the limit in the above equation (keeping terms up to second order in the small parameters), we find after a Fourier transformation the single spike train power spectrum for the AD model
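The flat Poisson background of such a spike train spectrum equals the mean firing rate (in the two-sided convention). This standard result is easy to verify numerically; in the sketch below the parameters are illustrative and a homogeneous Poisson train stands in for the unmodulated output:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs, T, r = 1000.0, 2000.0, 25.0            # sample rate (Hz), duration (s), rate (spikes/s)
n = int(T * fs)
# discretized spike train: delta peaks of unit weight represented as pulses of height fs
x = (rng.random(n) < r / fs) * fs

# two-sided power spectral density; for a Poisson process it is flat at the rate r
f, S = welch(x - x.mean(), fs=fs, nperseg=8192, return_onesided=False)
plateau = S[np.abs(f) > 100].mean()
print(plateau, r)                          # high-frequency plateau close to the mean rate
```

The small residual deviation from r stems from the Bernoulli correction factor (1 - rΔt) of the discrete-time approximation.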
Next, we calculate the single spike train power spectrum for the STS model. Employing Eq. (17), the last term in Eq. (65) can be written as
In analogy to Eq. (33), we can transform the first term of the autocorrelation function Eq. (65) into
with the difference that we now consider a single spike train and therefore average only over one independent noise process η. The above equation can be rewritten as
After a Fourier transformation of the above equation, the autocorrelation functions of signal and noise are transformed into their respective power spectra. Inserting the definition of Eq. (32), the first term of Eq. (69), which we denote by W, is Fourier transformed into
Changing the integration variable in Eq. (70), we find
For a strictly positive process (see Eq. (11)), the integral has only a single zero crossing. We can invert this relation, which leads to
The order of the correction term in the above equation is proportional to the square root of the probability that has been calculated in Appendix A, Eq. (52).
Employing Eqs. (65), (68), (69), and (71), we find in the limit (keeping terms up to second order in the small parameters) the single spike train power spectrum for the STS model
References
 1.
Tuckwell HC: Stochastic Processes in the Neurosciences. SIAM, Philadelphia; 1989.
 2.
Faisal AA, Selen LPJ, Wolpert DM: Noise in the nervous system. Nat Rev Neurosci 2008, 9: 292.
 3.
Shannon CE: A mathematical theory of communication. Bell Syst Tech J 1948, 27: 379.
 4.
Rieke F, Warland D, de Ruyter van Steveninck R, Bialek W: Spikes: Exploring the Neural Code. MIT Press, Cambridge; 1999.
 5.
Wiesenfeld K, Moss F: Stochastic resonance and the benefits of noise: from ice ages to crayfish and SQUIDs. Nature 1995, 373: 33.
 6.
Hänggi P: Stochastic resonance in biology. ChemPhysChem 2002, 3: 285–290.
 7.
McDonnell MD, Ward LM: The benefits of noise in neural systems: bridging theory and experiment. Nat Rev Neurosci 2011, 12: 415.
 8.
Gammaitoni L, Hänggi P, Jung P, Marchesoni F: Stochastic resonance. Rev Mod Phys 1998, 70: 223.
 9.
Lindner B, Garcia-Ojalvo J, Neiman A, Schimansky-Geier L: Effects of noise in excitable systems. Phys Rep 2004, 392: 321.
 10.
Stocks NG: Suprathreshold stochastic resonance in multilevel threshold systems. Phys Rev Lett 2000, 84: 2310.
 11.
Stocks NG, Mannella R: Generic noise-enhanced coding in neuronal arrays. Phys Rev E 2001, 64: Article ID 030902.
 12.
Chacron MJ, Doiron B, Maler L, Longtin A, Bastian J: Non-classical receptive field mediates switch in a sensory neuron's frequency tuning. Nature 2003, 423: 77.
 13.
Middleton JW, Longtin A, Benda J, Maler L: Postsynaptic receptive field size and spike threshold determine encoding of high-frequency information via sensitivity to synchronous presynaptic activity. J Neurophysiol 2009, 101: 1160.
 14.
Lindner B, Gangloff D, Longtin A, Lewis JE: Broadband coding with dynamic synapses. J Neurosci 2009, 29: 2076.
 15.
Sharafi N, Benda J, Lindner B: Information filtering by synchronous spikes in a neural population. J Comput Neurosci 2013, 34: 285.
 16.
Droste F, Schwalger T, Lindner B: Interplay of two signals in a neuron with heterogeneous synaptic short-term plasticity. Front Comput Neurosci 2013, 7: Article ID 86.
 17.
Knight BW: Dynamics of encoding in a population of neurons. J Gen Physiol 1972, 59: 734.
 18.
Gerstner W, Kistler WM: Spiking Neuron Models. Cambridge University Press, Cambridge; 2002.
 19.
Alonso JM, Usrey WM, Reid RC: Precisely correlated firing in cells of the lateral geniculate nucleus. Nature 1996, 383: 815.
 20.
Warzecha AK, Kretzberg J, Egelhaaf M: Temporal precision of the encoding of motion information by visual interneurons. Curr Biol 1998, 8: 359–368.
 21.
Trong PK, Rieke F: Origin of correlated activity between parasol retinal ganglion cells. Nat Neurosci 2008, 11(11): 1343–1351.
 22.
Churchland M, Byron M, Cunningham J, Sugrue LP, Cohen MR, Corrado GS, Newsome WT, Clark AM, Hosseini P, Scott BB, Bradley DC, Smith MA, Kohn A, Movshon JA, Armstrong KM, Moore T, Chang SW, Snyder LH, Lisberger SG, Priebe NJ, Finn IM, Ferster D, Ryu SI, Santhanam G, Sahani M, Shenoy KV: Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat Neurosci 2010, 13: 369–378.
 23.
Steinmetz PN, Roy A, Fitzgerald PJ, Hsiao SS, Johnson KO, Niebur E: Attention modulates synchronized neuronal firing in primate somatosensory cortex. Nature 2000, 404: 187–190.
 24.
Stopfer M, Bhagavan S, Smith BH, Laurent G: Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature 1997, 390: 70–74.
 25.
Kazama H, Wilson RI: Origins of correlated activity in an olfactory circuit. Nat Neurosci 2009, 12(9): 1136–1144.
 26.
Poulet JFA, Petersen CCH: Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 2008, 454: 881.
 27.
Gentet LJ, Avermann M, Matyas F, Staiger JF, Petersen CCH: Membrane potential dynamics of GABAergic neurons in the barrel cortex of behaving mice. Neuron 2010, 65(3): 422–435.
 28.
Binder MD, Powers RK: Relationship between simulated common synaptic input and discharge synchrony in cat spinal motoneurons. J Neurophysiol 2001, 86: 2266.
 29.
Neiman AB, Russell DF: Two distinct types of noisy oscillators in electroreceptors of paddlefish. J Neurophysiol 2004, 92: 492.
 30.
Warzecha AK, Rosner R, Grewe J: Impact and sources of neuronal variability in the fly's motion vision pathway. J Physiol 2013, 107(1): 26–40.
 31.
Bialek W, Deweese M, Rieke F, Warland D: Bits and brains—information flow in the nervous system. Physica A 1993, 200: 581.
 32.
Gabbiani F: Coding of time-varying signals in spike trains of linear and half-wave rectifying neurons. Netw Comput Neural Syst 1996, 7: 61.
 33.
Press WH, Teukolsky SA, Vetterling WT, Flannery BP: Numerical Recipes: The Art of Scientific Computing. 3rd edition. Cambridge University Press, Cambridge; 2007.
 34.
Gardiner CW: Handbook of Stochastic Methods. Springer, Berlin; 1985.
 35.
Bezrukov SM, Vodyanoy I: Stochastic resonance in non-dynamical systems without response thresholds. Nature 1997, 385: 319–321.
 36.
Gabbiani F, Cox SJ: Mathematics for Neuroscientists. Academic Press, San Diego; 2010.
 37.
Cox DR, Isham V: Point Processes. Chapman & Hall, London; 1980.
 38.
Feller W: An Introduction to Probability Theory and Its Applications, vol. 1. 3rd edition. Wiley, New York; 1968.
 39.
Gestri G, Mastebroek HAK, Zaagman WH: Stochastic constancy, variability and adaptation of spike generation—performance of a giant neuron in the visual system of the fly. Biol Cybern 1980, 38: 31.
 40.
de Ruyter van Steveninck RR, Lewen GD, Strong SP, Koberle R, Bialek W: Reproducibility and variability in neural spike trains. Science 1997, 275: 1805.
 41.
Feller W: An Introduction to Probability Theory and Its Applications, vol. 2. 2nd edition. Wiley, New York; 1971.
 42.
Bauerle N, Grubel R: Multivariate counting processes: copulas and beyond. ASTIN Bull 2005, 35(2): 379.
 43.
Neiman AB, Russell DF, Rowe MH: Identifying temporal codes in spontaneously active sensory neurons. PLoS ONE 2011, 6: Article ID e27380.
 44.
Trousdale J, Hu Y, Shea-Brown E, Josic K: A generative spike train model with time-structured higher order correlations. Front Comput Neurosci 2013, 7: Article ID 84.
 45.
Kuhn A, Aertsen A, Rotter S: Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput 2003, 15(1): 67–101.
 46.
Moreno-Bote R, Parga N: Auto- and cross-correlograms for the spike response of leaky integrate-and-fire neurons with slow synapses. Phys Rev Lett 2006, 96: Article ID 028101.
 47.
de la Rocha J, Doiron B, Shea-Brown E, Josic K, Reyes A: Correlation between neural spike trains increases with firing rate. Nature 2007, 448: 802.
 48.
Shea-Brown E, Josić K, de la Rocha J, Doiron B: Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys Rev Lett 2008, 100: Article ID 108102.
 49.
Ostojic S, Brunel N, Hakim V: How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J Neurosci 2009, 29: 10234.
 50.
Vilela RD, Lindner B: A comparative study of three different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation. Phys Rev E 2009, 80: Article ID 031909.
 51.
Abouzeid A, Ermentrout B: Correlation transfer in stochastically driven neural oscillators over long and short time scales. Phys Rev E 2011, 84(6): Article ID 061914.
 52.
Schultze-Kraft M, Diesmann M, Grun S, Helias M: Noise suppression and surplus synchrony by coincidence detection. PLoS Comput Biol 2013, 9(4): Article ID e1002904.
 53.
Bialek W, Zee A: Coding and computation with neural spike trains. J Stat Phys 1990, 59: 103.
 54.
Ermentrout GB, Terman DH: Mathematical Foundations of Neuroscience. Springer, New York; 2010.
 55.
Malyshev A, Tchumatchenko T, Volgushev S, Volgushev M: Energy-efficient encoding by shifting spikes in neocortical neurons. Eur J Neurosci 2013, 38(8): 3181–3188.
Acknowledgements
This work was funded by the BMBF (FKZ:01GQ1001A and FKZ:01GQ1001B).
Author information
Corresponding authors
Correspondence to Sergej O Voronenko or Wilhelm Stannat or Benjamin Lindner.
Additional information
Competing Interests
The authors declare that they have no competing interests.
Authors’ Contributions
SV carried out the analytical calculations and performed the numerical simulations. WS assisted in the analytical calculations for the STS model. BL conceived and guided the study. SV and BL wrote the manuscript. All authors contributed improvements to the final manuscript, which they have read and approved.
Keywords
 Time-dependent input
 Population coding
 Common noise
 Shifting of spikes
 Addition and deletion of spikes
 Mutual information
 Suprathreshold stochastic resonance