A Simple Algorithm for Averaging Spike Trains
© H. Julienne, C. Houghton; licensee Springer 2013
Received: 17 May 2012
Accepted: 15 February 2013
Published: 25 February 2013
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested on a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
Spike trains are highly variable, with the same stimulus causing different responses for different trials. While a stimulus will modulate a neuron’s firing pattern on a longer timescale, noise will affect spike timings on a shorter timescale, masking the encoded message. Therefore, it would often be useful to be able to summarize a set of such responses by averaging them, giving a single exemplar. This would speed up computations based on spiking responses and focus studies of coding in spike trains on features that are common across all responses to a given stimulus.
There have been numerous attempts to effectively summarize responses. These include the calculation of the peristimulus time histogram or spike density function [4, 5, 20] and the development of algorithms to calculate an ‘average’ or ‘central’ spike train. Here, an algorithm for averaging spiking responses is proposed. It uses an average filtered function to construct a central spike train. The spike trains are mapped into the space of functions by filtering them with a causal exponential filter. The average of these functions is calculated. This average function is then mapped back to a spike train by finding a sequence of spikes whose filtered function is close to the average function.
This is an instance of the well-studied problem in kernel methods of finding the pre-image of a point. Here, the calculation is performed approximately using a greedy algorithm, a type of matching pursuit algorithm. It can be implemented efficiently in this case because of the exponential filter used to map the spike trains to the space of functions.
These central spike trains are tested on a large data set recorded from zebra finch auditory neurons by the Frederic Theunissen laboratory and made available on the Collaborative Research in Computational Neuroscience database. The effectiveness of the central spike train in summarizing the set of responses is studied in various ways. Perhaps most importantly, the central spike trains are tested using a transmitted information measure of metric-based classification. The performance of the central spike train as a classification template is compared to the performance of the obvious alternative, the medoid response. The medoid of a set of responses is taken to be the response in the set with the lowest average distance from the rest of the set. It is found that the central spike trains appear to be considerably more effective at summarizing the responses than the medoid responses.
The algorithm works by mapping all the spike trains to functions, averaging these in the function space and then finding the spike train that best corresponds to this average function.
where the normalization factor of √(2/τ) is convenient because it means the kernel has unit norm, ∫₀^∞ k(t)² dt = 1. τ is a timescale. In the example considered in the Results section (Sect. 3), this timescale is chosen to match the timescale associated with the optimal metric-based clustering of the responses.
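As a concrete illustration, the filtering step can be sketched in a few lines of Python. This is a sketch rather than the authors' code: the grid resolution is arbitrary, and the √(2/τ) normalization follows the van Rossum convention of a unit-norm causal exponential kernel.

```python
import numpy as np

def filter_spike_train(spikes, t_grid, tau):
    """Map a spike train (array of spike times) to a function by summing
    causal exponential kernels: k(t) = sqrt(2/tau) * exp(-t/tau) for
    t >= 0 and zero for t < 0, so that the integral of k(t)**2 is one."""
    f = np.zeros_like(t_grid)
    for s in spikes:
        mask = t_grid >= s
        f[mask] += np.sqrt(2.0 / tau) * np.exp(-(t_grid[mask] - s) / tau)
    return f

# Filter three spikes on a one-second grid with tau = 20 ms
t = np.linspace(0.0, 1.0, 1000)
f = filter_spike_train(np.array([0.1, 0.4, 0.45]), t, tau=0.02)
```

Because the kernel is causal, the filtered function is zero before the first spike and decays exponentially after each spike.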
The choice of kernel function is motivated by the van Rossum metric, which also involves filtering. Indeed, the whole approach is motivated by the idea, illustrated by the van Rossum metric, that a useful way to calculate with spike trains is to first map them into the space of functions. This is in the spirit of the kernel-based smoothing often performed in estimating inhomogeneous neuronal firing rates and also in the spirit of the reproducing kernel Hilbert space framework for spike train signal processing.
The value of this function is easily computed and so it can be rapidly minimized with respect to the candidate spike time using, for example, the Brent or golden section method.
It might seem that the algorithm should continue only while the minimum value of the error change is negative. However, this tends to give central spike trains with fewer spikes than the average spike number in the collection. This appears to be an artefact of the use of a causal filter. Roughly speaking, if a spike time in the central spike train can be thought of as standing for a group of spikes from the original spike trains, then a residue is left behind in the difference between the average function and the filtered central train, proportional to a step function which is one after the group of spikes and zero elsewhere.
A better approach is to continue choosing the spike time that minimizes the error, whether this minimum is negative or positive, until the central spike train has a length that matches the average train length in the collection. The quantitative effect this has on central spike trains for the example application is given in the Results section (Sect. 3). In fact, the performance of the central spike train is hardly changed so the choice of one halting criterion over the other appears to be a matter of taste. It depends on whether it is useful to have a central spike train whose spike count matches the average for the collection.
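A minimal sketch of the greedy construction follows, with the one-dimensional minimization replaced by a grid search over candidate spike times; the function name and discretization are illustrative, not the authors' implementation.

```python
import numpy as np

def greedy_central_train(avg_f, t_grid, tau, n_spikes):
    """Greedily build a spike train whose filtered image approximates
    avg_f: at each step, append the candidate spike time that minimizes
    the squared error between the running filtered train and avg_f
    (a matching-pursuit step). Brent or golden-section search could
    replace the grid search used here."""
    dt = t_grid[1] - t_grid[0]
    norm = np.sqrt(2.0 / tau)
    current = np.zeros_like(avg_f)   # filtered image of the spikes so far
    spikes = []
    for _ in range(n_spikes):
        best_err, best_time = np.inf, None
        for s in t_grid:
            kern = np.where(t_grid >= s, norm * np.exp(-(t_grid - s) / tau), 0.0)
            err = np.sum((current + kern - avg_f) ** 2) * dt
            if err < best_err:
                best_err, best_time = err, s
        spikes.append(best_time)
        current += np.where(t_grid >= best_time,
                            norm * np.exp(-(t_grid - best_time) / tau), 0.0)
    return np.sort(np.array(spikes))

# Recover two spikes from the average of their filtered images
t = np.linspace(0.0, 1.0, 201)
tau = 0.02
k = lambda s: np.where(t >= s, np.sqrt(2 / tau) * np.exp(-(t - s) / tau), 0.0)
central = greedy_central_train(k(0.2) + k(0.6), t, tau, n_spikes=2)
```

In this toy case the well-separated spikes give nearly orthogonal kernels, so the greedy steps recover the original spike times.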
Obviously, from the perspective of the spike train metric, the true average of the spike trains is given by the function average. However, this function average is not itself a spike train. The construction described here aims to find the spike train whose image under the filtering map is as close as possible to the function average. This can be seen as a particular instance of the more general question of finding point process prototypes.
In general, the space of filtered spike trains will not include the function average. This cannot be avoided. However, the aim here is to summarize a collection of responses to repeated presentations of a single stimulus. The function average is not a summary in that its most concise representation is given by the times of all the spikes in the collection. Since the algorithmic expense of calculating the distance between two spike trains is of the order of the number of spikes, this means, for example, that it is as expensive to calculate the distance between a novel spike train v and the function average as it is to calculate the distance between v and all of the spike trains individually. Calculating the distance between v and the central spike train is thus n times faster.
The second contribution to the difference between the filtered central spike train and the function average is the use of the greedy algorithm: the result may not actually be the filtered spike train which is closest to the function average. Using the greedy algorithm is an efficient way of finding a candidate, but it is necessary to check that the result is close to the optimal choice of central spike train. This is examined in the Results section (Sect. 3) where a genetic algorithm is used to improve on the greedy algorithm result. It is seen that the improvement is minimal.
The averaging algorithm has been tested using the very large zebra finch data set collected by the Frederic Theunissen laboratory at UC Berkeley and made available on the Collaborative Research in Computational Neuroscience database. The details of the experiment and of the stimulus set are given in [2, 8, 26–28]. The data set consists of extracellular recordings from neurons in the auditory pathway of anesthetized zebra finches. Different sound stimuli are used, including a corpus of zebra finch songs. The song responses are considered here. The song corpus generally includes 20 songs. Here, for simplicity, only those data sets with a 20-song corpus and ten trials for each song are used. To give all the spike trains the same temporal length, only the first second is used; the lengths of the stimuli vary, but all are at least one second long. Although the algorithm described here works fine if some empty spike trains are included in the collection to be summarized, for ease of comparison with other methods any cell with empty spike trains is excluded. This gives a total of 183 cells for which a data set of 200 spike trains has been recorded.
A clustering measure has been used for testing the averaging algorithm. Roughly, the central spike trains are used as a template for clustering by song and the accuracy of this classification is then used as a measure of how well the algorithm performs. Obviously, a test based on distance-based clustering requires the choice of a distance measure between spike trains. Here, the van Rossum metric is used.
where the two functions being compared are the filtered spike trains, as before. The van Rossum metric requires a choice of timescale: the τ in the decaying exponential in the kernel, Eq. (3). Here, τ is chosen to give the best clustering according to the transmitted information based measure. It is worth describing this in detail, since a similar measure is used to evaluate the averaging algorithm.
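For reference, the van Rossum distance can be computed directly from the filtered spike trains. This is a sketch: the Riemann-sum integration and the time grid are illustrative, and the unit-norm causal exponential kernel is assumed.

```python
import numpy as np

def van_rossum_distance(u, v, tau, t_grid):
    """Van Rossum distance: the L2 distance between two spike trains
    after each is filtered with a unit-norm causal exponential kernel
    of timescale tau."""
    def filt(spikes):
        f = np.zeros_like(t_grid)
        for s in spikes:
            f += np.where(t_grid >= s,
                          np.sqrt(2.0 / tau) * np.exp(-(t_grid - s) / tau), 0.0)
        return f
    dt = t_grid[1] - t_grid[0]
    return np.sqrt(np.sum((filt(u) - filt(v)) ** 2) * dt)

t = np.linspace(0.0, 2.0, 2001)
d_same = van_rossum_distance([0.2, 0.5], [0.2, 0.5], tau=0.05, t_grid=t)
d_far = van_rossum_distance([0.2], [1.2], tau=0.05, t_grid=t)
```

Identical trains have distance zero, and two single spikes separated by many multiples of τ approach the distance √2 between orthogonal unit-norm kernels.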
where z is intended to reduce the effect of outliers, with z = −2 being a typical choice. For each test response, the distances are compared and the stimulus corresponding to the smallest distance is noted. Labeling the stimulus corresponding to the nearest cluster, one is then added to the corresponding entry of the confusion matrix.
The transmitted information h is a useful measure of the accuracy of clustering indicated by the confusion matrix. The maximum value of h is obtained for perfect clustering of equally likely stimuli, in which case it equals the entropy of the stimulus set. h is the mutual information between the clustering and this perfect clustering. For convenience, a normalized information is used.
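The transmitted information can be estimated from the confusion matrix as a standard plug-in mutual information. In this sketch, normalizing by log₂ of the number of stimuli is an assumption, chosen so that perfect clustering of equally likely stimuli gives a normalized value of one.

```python
import numpy as np

def transmitted_info(confusion):
    """Mutual information (in bits) between true and assigned stimulus,
    estimated from a confusion matrix of counts."""
    p = confusion / confusion.sum()
    px = p.sum(axis=1, keepdims=True)   # true-stimulus marginal
    py = p.sum(axis=0, keepdims=True)   # assigned-cluster marginal
    nz = p > 0                          # skip zero cells in the log
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def normalized_info(confusion):
    """h divided by its maximum, log2 of the number of stimuli."""
    return transmitted_info(confusion) / np.log2(confusion.shape[0])
```

A diagonal confusion matrix (perfect classification of 20 songs) gives a normalized value of one, while a uniform confusion matrix gives zero.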
The same τ is used in the averaging algorithm. The minimization of the error is also performed numerically using a golden section search. For each iteration of the greedy algorithm, this is initialized with a triplet of bracketing values found by evaluating the error on a coarse grid of candidate times and choosing the value which gives the smallest error.
Although the normalized information is intended as a sort of proxy for the information transmitted by the unsupervised clustering of responses using the pairwise distances, it is derived from the supervised procedure described here. This same procedure is used to evaluate the central spike trains. Since the supervised algorithm matches test spike trains against the stimulus-defined clusters, it gives a useful benchmark for a procedure where test spike trains are matched to central spike trains.
Thus, the central spike trains are evaluated using a transmitted information measure. Again, each response is considered in turn and the remaining responses are clustered by stimulus. In this case, however, the distance between the test response and the clusters is calculated by finding the central spike train for each cluster and measuring the distance between the test response and this central response. Since the test response has been removed from its cluster, it is not used in calculating the central spike train, making this a cross-validation procedure. The confusion matrix and transmitted information are calculated in the usual way.
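The cross-validation loop can be sketched generically, with the metric and the averaging routine passed in as functions; the toy scalar "responses" below are purely illustrative stand-ins for spike trains.

```python
import numpy as np

def loo_confusion(responses, distance, central):
    """Leave-one-out template classification. `responses` is a list, one
    entry per stimulus, of lists of responses; each held-out response is
    assigned to the stimulus whose central template (built without it)
    is nearest. Returns the confusion matrix of counts."""
    n = len(responses)
    conf = np.zeros((n, n))
    for i, trials in enumerate(responses):
        for k, test in enumerate(trials):
            templates = []
            for j, trs in enumerate(responses):
                # exclude the held-out response only from its own cluster
                kept = [r for m, r in enumerate(trs) if not (j == i and m == k)]
                templates.append(central(kept))
            nearest = int(np.argmin([distance(test, tpl) for tpl in templates]))
            conf[i, nearest] += 1
    return conf

# Toy example: scalar "responses", absolute-difference metric, mean "central"
toy = [[0.10, 0.20, 0.15], [0.90, 1.00, 0.95]]
conf = loo_confusion(toy, distance=lambda a, b: abs(a - b),
                     central=lambda rs: float(np.mean(rs)))
```

In the real application, `distance` would be the van Rossum metric and `central` the greedy averaging algorithm.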
As a comparison, the transmitted information is also calculated using the same weighted average metric distance defined above, both with z = 1, giving the straightforward average, and with z = −2 to underweight outliers. Additionally, the transmitted information is calculated using the function average for each stimulus to cluster the data. Finally, the transmitted information is calculated using the medoid spike train instead of the central one.
Transmitted information results. In the first column, the transmitted information, averaged over all 183 cells, is given for clustering to the central response (central) and to the medoid response (medoid). For comparison, the transmitted information is also shown for classification methods which involve all the spike times: the clustering to all the other responses using the weighted average distance (all z = −2), the clustering to all using the unweighted average distance (all z = 1) and clustering to the function average (function). The second column gives the fraction of cells for which the transmitted information for the other four clustering methods is larger than for the central clustering. The third column shows the average relative value, calculated for each method by dividing the transmitted information for each cell by the transmitted information using the central spike train and averaging; the figure after the ± is the one-sigma variation in this number.

Method        Better than central   Relative to central
medoid                              0.70 ± 0.15
all z = −2                          0.93 ± 0.07
all z = 1                           0.84 ± 0.12
function                            1.05 ± 0.07
Halting criteria comparison. In the first column, the transmitted information, averaged over all 183 cells, is given for clustering to the central response (central) and to an alternative central response with a different criterion for halting the process of adding spikes (alternative). For (central), the central response has the same number of spikes as the average, rounding down, for the cluster it summarizes. For (alternative), spikes are added to the central response while the quantity given in Eq. (9) remains negative. The second column gives the average number of spikes for each. The third column gives the fraction of cells for which (alternative) has a value greater than the value for (central); the fourth column gives the average relative value, with the one-sigma variation.

Method        Better than central   Relative to central
alternative                         0.98 ± 0.06
The best clustering comes from using the function average. This is unsurprising; as discussed in the Methods section (Sect. 2), the function average represents the average in the function space in which the metric distances are calculated. The aim here is to find a spike train which maps to a point in the subspace of filtered spike trains which is as close as possible to this function average.
This optimization was performed for the responses to one song chosen at random for each cell. It was found that the distance between the central spike train and the function average after this optimization is 0.967 times what it was before, on average. As a comparison, the distance between the medoid and the function average is, on average, 1.407 times the distance of the central spike train from the function average.
This was examined on the example data set by replacing each step of the greedy algorithm with a two-dimensional optimization over spike time and weight using the Nelder–Mead method, initialized using a grid search. However, although this does decrease the error ℰ, it causes only a tiny improvement of the clustering performance.
Another measure of centrality is the summed distance between a spike train and the spike trains in the collection. The medoid is chosen as the spike train in the collection that minimizes this distance. As a measure of how central the central spike train is, the summed distance between it and the other spike trains for the song is compared to the summed distance between the medoid and the spike trains. The summed distance for the medoid is, on average, 1.19 times the summed distance for the central spike train. The function average is even more central; its summed distance is on average 0.87 times the summed distance for the central spike train. However, this represents the center of the data in a larger space, the space of functions, rather than the image in the function space of the space of spike trains.
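Given a precomputed pairwise distance matrix, the medoid is a one-liner; a sketch:

```python
import numpy as np

def medoid_index(dist):
    """Index of the medoid: the member of the collection with the
    smallest summed (equivalently, average) distance to the others."""
    return int(np.argmin(dist.sum(axis=0)))

# Points on a line: the medoid is the middle point of the cluster,
# robust to the outlier at 10
x = np.array([0.0, 1.0, 2.0, 3.0, 10.0])
d = np.abs(x[:, None] - x[None, :])
```

In the application here, the entries of `dist` would be van Rossum distances between the responses to one song.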
Thus, the medoid is much less central than the central spike train. This may seem surprising since the medoid is chosen as the most central spike train in the collection. However, it is normal in high-dimensional data for the medoid to lie some distance from the center of a data set because the volume around the center is a smaller fraction of the total volume in which the data points fall. For example, for uniformly distributed data, the fraction of a unit ball in D dimensions which is within ϵ of the center is ϵ^D. While it is difficult to associate a dimension with the space of spike trains, the one second spike trains considered here do behave like they belong to a high-dimensional space.
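The scaling is easy to check numerically; the dimensions chosen here are arbitrary.

```python
def central_fraction(eps, dim):
    """Fraction of a unit ball in `dim` dimensions lying within eps of
    the center: volume scales as radius**dim, so the fraction is eps**dim."""
    return eps ** dim

# Even points "fairly close" to the center become rare in high dimension
fractions = [central_fraction(0.9, d) for d in (2, 10, 100)]
```

With ϵ = 0.9, the fraction falls from 0.81 in two dimensions to under three in a hundred thousand in one hundred dimensions, which is why a typical sample point, and hence the medoid, lies well away from the center.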
Clustering using the Victor–Purpura metric. In the first column, the transmitted information, averaged over all 183 cells, is given for clustering using the Victor–Purpura metric. Here, the central spike train has been calculated in the same way as it has elsewhere, but everything else is calculated using the Victor–Purpura metric; in particular, the medoid is the Victor–Purpura medoid and the clustering performed to calculate the confusion matrix, and hence the transmitted information, depends on the Victor–Purpura distances. As before, central gives results for the central spike train, medoid for the medoid, and (all z = −2) and (all z = 1) use the average weighted and unweighted distances. The function average is not considered in this case since calculating the distance to the function average would involve extending the Victor–Purpura metric to deal with an object of this sort. The second column gives the fraction of cells for which the transmitted information for the other three clustering methods is larger than for the central clustering; the third column shows the average relative value calculated for each method, with the one-sigma variation in this number.

Method        Better than central   Relative to central
medoid                              0.74 ± 0.16
all z = −2                          1.08 ± 0.12
all z = 1                           0.97 ± 0.17
Although simple and straightforward, the averaging algorithm succeeds extremely well in summarizing the response sets in the data considered here. It is anticipated that this will have numerous practical applications in analyzing sets of electrophysiological responses.
It would be interesting to evaluate the algorithm using different data sets and to measure the extent to which it preserves the internal temporal features of the spike trains. It has been previously noted that averaging over many responses may obscure such features. The aim of this paper is to define a central spike train, an object which can be interpreted as a sort of average, but which is nonetheless a spike train. This is achieved in the sense that the central spike train is a list of spike times, but that does not necessarily mean it shares the less easily specified properties possessed by real spike trains. For example, real spike trains often have inter-spike interval distributions which are well described by a Gamma distribution or an inverse Gaussian distribution [6, 7]. However, as a type of average, the central spike train might differ with respect to properties of this sort, precisely because noise has been removed.
In fact, this is what seems to happen for the data examined in the Results section (Sect. 3). By design, the mean inter-spike interval for the central spike train matches that for the real spike trains. However, the standard deviation of the inter-spike interval length for the central spike train is considerably smaller than the value for the real spike trains. Thus, the central spike trains are more regular than the spike trains they summarize. A generative model could be envisaged where the spike times and spike counts of the central spike train are varied to give a collection of spike trains whose statistical properties match those of the original experimental spike trains. Of course, this would involve a fuller investigation of the statistical properties of the central spike train, such as the higher order statistics of the inter-spike intervals and the spike-spike correlation histograms.
It is hoped that temporal properties crucial to the coding structure of the spike train can be largely preserved in a single exemplar like the central spike train. This is typically the hope when constructing an average; it does not contain all the information present in the original collection of responses, but does include a substantial part of the signal, as opposed to the noise responsible for trial-to-trial variation. In contrast, in the peri-stimulus time histogram the spikes across trials are aggregated into bins. Binning also constitutes an approach to summarizing a collection of trials, but it does so using an object which does not resemble the original signal. Thus, the peri-stimulus time histogram reduces temporal precision through binning, and the central spike train removes trial-to-trial structure by representing the collection as a single spike train. In each case, information is lost, but it is hoped that this lost information is noise. The peri-stimulus time histogram replaces the original spike trains with something like a rate, the construction here replaces them with the central spike train. In studying coding, this may have the advantage that, as a spike train, the effect of the central spike train on a post-synaptic neuron can be investigated.
One disadvantage of this algorithm is that it requires a value for τ, the timescale. Typically, this is chosen so as to optimize the metric clustering and this optimization requires the calculation of the distance matrix for the responses for different values of τ. In applications with large data sets where the results are needed rapidly, this might become problematic. Consequently, it would be interesting to consider other methods of choosing τ based on easily-accessed properties of the spike trains such as spike number and spike train to spike train variability.
The average is the first moment of a random variable. Often it is useful to also examine quantities such as variance, which are derived from higher moments; for example, it might be interesting to examine whether some types of input produce a noisier output than others. Certainly, it is easy to estimate the variance in the function space, giving a form of the peri-stimulus variance histogram. However, this is a function and it is not clear how to interpret the function variance in terms of spike trains. This would be an interesting topic for further work. Another approach would be to examine the distribution of displacements between the central spike train and the spike trains in the collection, something that has previously been considered using pair-wise displacement in the collection.
It might also be informative to use the central spike train to average over different neurons rather than over different trials. In this application, deviation from the average would correspond, in part, to aspects of coding which differ from a summed population code.
The choice of the exponential kernel is somewhat arbitrary. However, from the example of the van Rossum metric and of kernel density estimation in statistics, it is unlikely that changing the kernel will have a strong effect. One interesting idea might be to use the actual jitter distribution of spike times in the set of responses as a kernel. It would also be interesting to consider the biological basis for the averaging algorithm itself. Perhaps the electrodynamics of neurons can be, in part, interpreted as an averaging algorithm of this sort. Some models of auditory object recognition rely on the use of ‘template’ or ‘memory’ spike trains. Perhaps the central spike train could play a role here.
We are very grateful to Frederic Theunissen, Sarah M.N. Woolley, Thane Fremouw, and Noopur Amin for their generosity in making their data available on the Collaborative Research in Computational Neuroscience website and to an anonymous referee for suggesting the weighted spike train approach discussed in the Results section (Sect. 3). We thank the James S. McDonnell Foundation for financial support through CJH’s Scholar Award in Understanding Human Cognition.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.