 Research
 Open Access
Effect of Neuromodulation of Short-Term Plasticity on Information Processing in Hippocampal Interneuron Synapses
The Journal of Mathematical Neuroscience volume 8, Article number: 7 (2018)
Abstract
Neurons in a microcircuit connected by chemical synapses can have their connectivity affected by the prior activity of the cells. The number of synapses available for releasing neurotransmitter can be decreased by repetitive activation through depletion of readily releasable neurotransmitter (NT), or increased through facilitation, where the probability of release of NT is increased by prior activation. These competing effects can create a complicated and subtle range of time-dependent connectivity. Here we investigate the probabilistic properties of facilitation and depression (FD) for a presynaptic neuron that is receiving a Poisson spike train of input. We use a model of FD that is parameterized with experimental data from a hippocampal basket cell and pyramidal cell connection, for fixed-frequency input spikes at frequencies in the range of theta (3–8 Hz) and gamma (20–100 Hz) oscillations. Hence our results will apply to microcircuits in the hippocampus that are responsible for the interaction of theta and gamma rhythms associated with learning and memory. A control situation is compared with one in which a pharmaceutical neuromodulator (muscarine) is employed. We apply standard information-theoretic measures such as entropy and mutual information, and find a closed-form approximate expression for the probability distribution of release probability. We also use techniques that measure the dependence of the response on the exact history of stimulation the synapse has received, which uncovers some unexpected differences between the control and muscarine-added cases.
Introduction
Neuronal activity can have profound effects on functional connectivity at the level of synapses. Through repetitive activation, the strength, or efficacy, of synaptic release of neurotransmitter (NT) can be decreased through depletion, or increased through facilitation. Both of these competing processes involve intracellular calcium and may occur within a single synapse. Different time scales of facilitation and depression enable temporally complex functional connectivity. Here we investigate the probabilistic properties of facilitation and depression (FD) for a presynaptic neuron that is receiving a repetitive, in vivo-like Poisson spike train of input, i.e. the interspike intervals follow an exponential distribution. We use a model of FD that was parameterized with experimental data from dual whole-cell recordings from a presynaptic parvalbumin-positive (PV) basket cell (BC) connected to a postsynaptic CA1 (Cornu Ammonis 1 subregion) pyramidal cell, for a family of fixed-frequency input spike trains into the presynaptic PV BC [1]. Our results will thus apply to microcircuits in the hippocampus that participate in the generation of theta and gamma rhythms associated with learning and memory.
The role of synaptic plasticity and computation has been analyzed and reported on in numerous papers over the past 30 years. A review of feedforward synaptic mechanisms and their implications can be found in [2]. In this paper Abbott and Regehr state "The potential computational power of synapses is large because their basic signal transmission properties can be affected by the history of presynaptic and postsynaptic firing in so many different ways." They also outline the basic function of a synapse as a signal filter as follows: synapses with an initially low probability of release act as high-pass filters through facilitation, while synapses with an initially high probability of release exhibit depression and subsequently serve as low-pass filters. Intermediate cases, in which the synapse can act as a band-pass filter, exist. Furthermore, short-term plasticity can influence the availability of postsynaptic receptors to bind neurotransmitter. For example, reducing neurotransmitter release probability can reduce postsynaptic receptor desensitization, effectively increasing the efficacy of synaptic transmission during high-frequency stimulation.
Other functional roles of short-term synaptic plasticity can be found in [3]. There they discuss signal processing capabilities in the auditory system, visual processing in the retina, olfactory processing and even electrosensory processing in weakly electric fish. In the hippocampus, facilitation of excitatory synapses combined with depression of inhibitory synapses can amplify high-frequency inputs, which selects for the high-frequency output seen in place cells. Inhibitory interneuron synapses in the hippocampus, such as the kind studied here, show a dynamic range of reaction when recruited via different pathways. In a related synopsis [4], the plasticity of inhibitory connections is studied in the context of spike timing-dependent plasticity and network function. Taking network function with plasticity a step further, [5] analyzes simple neural networks in cortex with Hebbian-type learning rules affected by plasticity, modeling memory storage in networks with inhibitory synapses. Results are largely general and any conclusions drawn are not immediately applicable to actual brain function. Furthermore, the distinction between long- and short-term plasticity in this analysis is not made clear.
Synaptic depression as a regulator of synaptic transmission is discussed thoroughly in [6]. Here it is supposed that synaptic depression can keep synaptic efficacy constant relative to changes in probability of release. Also, depressing synapses can serve as a gain control in the face of rapidly firing presynaptic cells. It is also established that synaptic depression favors temporal encoding of information, because the steady state is reached quickly and retains no information about the absolute firing rate. However, these synapses are sensitive to a sudden change in firing rate, detecting rate changes in low- and high-frequency inputs with similar sensitivity.
In cortex, [7] draws the distinction between what they call synaptic vs. structural plasticity, focusing on structural plasticity, which directly affects the synaptic weights in a neural network model. They find that certain characteristics arise directly from the interaction of structural plasticity and synaptic plasticity rules. These characteristics in turn create a variety of stable synaptic weight distributions which could support information storage mechanisms. Neuromodulation can further alter the time-dependent characteristics of a synapse. Acting on the presynaptic side, many neuromodulators reduce the probability of release, protecting the synapse from depletion and therefore extending the duration or frequency sensitivity of the synapse overall.
The calculation of various kinds of measures of information transfer at the synapse level has been explored in many papers. For instance, in [8] information transmission is studied through a master-equation-based stochastic model of presynaptic release of vesicles (which depends on intracellular calcium concentration), combined with a low-dimensional model of membrane charging at the postsynaptic side. The model itself has an advantage over models of average quantities, in that it captures fluctuations of the dynamic variables. In [9] they consider synaptic transmission in neocortex, and compute the amount of information conveyed by a single response to a specific sequence of spike stimulation, as it is affected by short-term synaptic plasticity. They determine that for any given dynamic synapse there is an optimal frequency of input stimulation for which the information transfer is maximal. A mathematical model of the calyx of Held was used in [10] to study synaptic depression due to repeated stimulation seen in vitro. They quantify the information contained in the postsynaptic current amplitude about preceding interspike intervals using a mutual information calculation. Both [9] and [10] have directly inspired the work we present in this paper. We add here that Tsodyks and Markram also included such dynamic synapses in a numerical network model [11].
Other work focuses on mutual information in spike trains, measuring the transfer of information between input spikes and output spike response, with the goal of addressing reliability in rate coding [12]. Another study [13] used in vivo spike trains recorded from monkeys to derive a synapse model with activity-dependent depression, showing a decrease in redundancy from input to output spike train. Also concerning rate coding, in [14] a functional measure called the Synaptic Information Efficacy (SIE) is devised to measure mutual information between input and output spike trains in a noisy environment.
In [15] we parameterize a simple model of presynaptic plasticity from work by Lee and colleagues [16] with experimental data from cholinergic neuromodulation of GABAergic transmission in the hippocampus. The model is based upon a calcium-dependent enhancement of probability of release and recovery of signaling resources. (For a review of these mechanisms see [17].) It is one of a long sequence of models developed from 1998 to the present, with notable contributions by Markram [18] and by Dittman, Kreitzer and Regehr [19]. The latter is a good exposition of the model as it pertains to the various types of short-term plasticity seen in the central nervous system, where the underlying plasticity depends on physiologically relevant dynamics of calcium influx and decay within the presynaptic terminal. In our work, we use the Lee model to create a two-dimensional discrete dynamical system in variables for the calcium concentration in the presynaptic area and the fraction of sites that are ready to release neurotransmitter into the synaptic cleft. The map is parameterized with experimental results from paired whole-cell recordings at CA1 PV basket cell–pyramidal cell synapses. The PV basket cell was presented with current pulses at fixed frequencies, resulting in trains of presynaptic action potentials evoking GABA transmission onto the postsynaptic pyramidal cell. Synaptic transmission is manifested as trains of \(\mathit{GABA}_{A}\) receptor-mediated inhibitory postsynaptic currents (IPSCs). Experiments were run in control conditions and with muscarine added, which acts upon presynaptic muscarinic acetylcholine receptors (mAChRs) to cause a reduction in the observed IPSCs.
Various parameterizations and hidden parameter dependencies were investigated using Markov chain Monte Carlo (MCMC) parameter estimation techniques. This analysis reveals that a frequency dependence of cholinergic modulation requires both calcium-dependent recovery from depression and mAChR-induced inhibition of presynaptic calcium channels. A reduction in calcium entry into the presynaptic terminal in the kinetic model accounted for the frequency-dependent effects of mAChR activation.
We now use our model to investigate the information processing properties of this synapse, in control and neuromodulation conditions. We possess two parameterizations: one from experiments in control conditions, the other from synapses undergoing activation of mAChRs by muscarine. Hence we can analyze the effect of this cholinergic modulation on information processing at the synapse level. Because network oscillations associated with learning and memory feature PV basket cells firing in the gamma range [20], we examine a range of frequencies from near zero to 100 Hz in what follows.
As mentioned previously, this analysis is motivated and guided by the work of Markram et al. in [9]. That paper analyzes how much information about previous interspike intervals is contained in the size of a single response of a dynamic synapse, i.e. a synapse that is affected by the history of presynaptic activity. They derive expressions for the optimal frequency of the input Poisson spike train in terms of coding temporal information in depressing and facilitating neocortical synapses. It was found that depressing synapses are optimal in this sense at low firing rates (0.5–5 Hz), while facilitating synapses are optimal at higher rates (9–70 Hz). This is not surprising, given that the average postsynaptic response is larger at low frequencies in depressing synapses, and larger at higher frequencies in facilitating synapses. The more interesting result is the presence of a peak in the information transferred at a set frequency, an actual local maximum that is particular to each specific synapse. In what follows we examine how cholinergic neuromodulation (the addition of muscarine) affects this result, along with an alternative way to measure the information transferred from a sequence of preceding interspike intervals.
FD Model
Many mathematical models have been developed to describe short-term plasticity over the past 20 years [18, 19, 21, 22]. More flexible models include intracellular calcium dynamics [23], because probability of release is thought to be directly dependent upon calcium concentration in the presynaptic terminal. Recovery from synaptic depression is also thought to be accelerated by the presence of calcium, thereby unifying the underlying molecular mechanisms of facilitation and depression [19, 24–26].
In our model we take the probability of release (\(P_{\mathrm{rel}}\)) to be the fraction of a pool of synapses that will release a vesicle upon the arrival of an action potential at the terminal. Following the work of Lee et al. [16], we postulate that \(P_{\mathrm{rel}}\) increases monotonically as a function of calcium concentration in a sigmoidal fashion to asymptote at some \(P_{\mathrm{max}}\). The kinetics of the synaptotagmin-1 receptors that bind the incoming calcium suggest a Hill equation with coefficient 4 for this function. The half-height concentration value, K, and \(P_{\mathrm{max}}\) are parameters determined from the data.
After releasing vesicles upon stimulation, some portion of the pool of synapses will not be able to release vesicles again if stimulated within some time interval, i.e. they are in a refractory state. This causes "depression": a monotonic decay of the amplitude of the response upon repeated stimulation. The rate of recovery from the refractory state is thought to depend on the calcium concentration in the presynaptic terminal [24, 25, 27]. Following Lee et al. [16], we assume a simple monotonic dependence of the rate of recovery on calcium concentration, a Hill equation with coefficient 1, starting at some \(k_{\mathrm{min}}\) and increasing asymptotically to \(k_{\mathrm{max}}\) as the concentration increases, with a half height of \(K_{r}\). Muscarine, binding to presynaptic muscarinic acetylcholine receptors (mAChRs), is thought to cause inhibition of presynaptic calcium channels, thereby decreasing the amount of calcium that floods the terminal when it receives an action potential [28].
An example of this process is seen in the set of experiments illustrated in Fig. 1. Here whole-cell recordings were performed from synaptically connected pairs of neurons in mouse hippocampal slices from PV-GFP mice [1]. The presynaptic neuron was a PV basket cell (BC) and the postsynaptic neuron was a CA1 pyramidal cell. Short, 1–2 ms duration suprathreshold current steps were used to evoke action potentials in the PV BC from a resting potential of −60 mV, and trains of 25 action potentials were evoked at 5, 50, and 100 Hz from the presynaptic basket cell. The result in the postsynaptic neuron is the activation of \(\mathit{GABA}_{A}\)-mediated inhibitory postsynaptic currents (IPSCs). Upon repetitive stimulation, the amplitude of the synaptically evoked IPSC declines to a steady-state level. These experiments were conducted with 5, 50 and 100 Hz stimulation pulse trains in order to test frequency-dependent short-term plasticity effects. We note that oscillations in neural networks in the "gamma" range are associated with learning and memory; for a review see [20]. Muscarine activates presynaptic metabotropic/muscarinic acetylcholine receptors (mAChRs), which cause a reduction in the response overall, and subsequently in the amount of depression in the train.
The peak of the measured postsynaptic IPSC is presumed to be proportional to the total number of synapses that receive stimulation, \(N_{{\mathrm{tot}}}\), that are also ready to release (\(R_{\mathrm{rel}}\)), i.e. \(N_{{\mathrm{tot}}} R_{\mathrm{rel}}\), multiplied by the probability of release \(P_{\mathrm{rel}}\). That is, the peak \(\mathrm{IPSC}\sim N_{{\mathrm{tot}}} R_{\mathrm{rel}} P_{\mathrm{rel}}\). \(P_{\mathrm{rel}}\) and \(R_{\mathrm{rel}}\) are both fractions of the total, and thus range between 0 and 1. Without loss of generality, we consider the peak IPSC to be proportional to \(R_{\mathrm{rel}} P_{\mathrm{rel}}\).
The presynaptic calcium concentration itself, \([\mathit{Ca}]\), is assumed to follow first-order decay kinetics to a base concentration, \([\mathit{Ca}]_{{\mathrm{base}}}\). At this point we choose \([\mathit{Ca}]_{{\mathrm{base}}}=0\), since locally (near the synaptotagmin-1 receptors) the concentration of calcium will be quite low in the absence of an action potential. The evolution equation for \([\mathit{Ca}]\) is then simply \(\tau_{\mathrm{ca}} \frac{d[\mathit{Ca}]}{dt}=-[\mathit{Ca}]\), where \(\tau_{\mathrm{ca}}\) is the calcium decay time constant, measured in ms. Upon pulse stimulation, presynaptic voltage-gated calcium channels open, and the concentration of calcium at the terminal increases rapidly by an amount \(\delta\) (measured in µM): \([\mathit{Ca}]\rightarrow[\mathit{Ca}]+\delta\) at the time of the pulse. Note that calcium buildup is possible over a train of pulses if the decay time is long enough relative to the interpulse interval.
We nondimensionalize the calcium concentration, rescaling it by the value of δ in the control case, \(\delta_{c}\), and defining \(C=\frac{[\mathit{Ca}]}{\delta_{c}}\). After a stimulus occurring at time \(t=0\), which results in an increase in C by an amount \(\Delta=\frac{\delta}{\delta_{c}}\), the concentration of calcium is \(C=C_{0} e^{-t/\tau_{\mathrm{ca}}}+\Delta\). In the control case this further simplifies to \(C=C_{0} e^{-t/\tau_{\mathrm{ca}}}+1\).
As mentioned previously, the probability of release \(P_{\mathrm{rel}}\) and the rate of recovery, \(k_{{\mathrm{recov}}}\), depend monotonically on the instantaneous calcium concentration via Hill equations with coefficients of 4 and 1, respectively; i.e. \(P_{\mathrm{rel}}= P_{\mathrm{max}} \frac{C^{4}}{C^{4}+K^{4}}\), and \(k_{{\mathrm{recov}}} = k_{\mathrm{min}} + \Delta k \frac{C}{C+K_{r}}\), where \(\Delta k = k_{\mathrm{max}}-k_{\mathrm{min}}\). The variable \(R_{\mathrm{rel}}\) is governed by the ordinary differential equation \(\frac{d R_{\mathrm{rel}}}{dt}=k_{{\mathrm{recov}}}(1-R_{\mathrm{rel}})\), which can be solved exactly for \(R_{\mathrm{rel}}(t)\): \(R_{\mathrm{rel}}(t)=1-(1-R_{0}) (\frac{C_{0} e^{-t/\tau_{\mathrm{ca}}} + K_{r}}{K_{r} +C_{0}} )^{\Delta k\,\tau_{\mathrm{ca}}} e^{-k_{\mathrm{min}} t}\). \(P_{\mathrm{rel}}\) is also a function of time, as it follows the concentration of calcium after a stimulus.
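As a sanity check, the closed-form expression for \(R_{\mathrm{rel}}(t)\) (with the dimensionless exponent \(\Delta k\,\tau_{\mathrm{ca}}\), which makes the formula an exact solution of the ODE) can be compared against direct numerical integration of the recovery equation. The Python sketch below uses illustrative parameter values chosen for demonstration only, not the fitted values of Table 2:

```python
import numpy as np

# Illustrative parameter values (assumptions, not the fitted values of Table 2)
tau_ca = 10.0                       # calcium decay time constant (ms)
k_min, k_max, K_r = 0.01, 0.1, 0.2  # recovery-rate parameters
dk = k_max - k_min
C0, R0 = 1.0, 0.5                   # calcium and releasable fraction after a spike

def R_closed(t):
    """Closed-form solution of dR/dt = k_recov(C)*(1 - R) with C = C0*exp(-t/tau_ca)."""
    factor = ((C0 * np.exp(-t / tau_ca) + K_r) / (K_r + C0)) ** (dk * tau_ca)
    return 1.0 - (1.0 - R0) * factor * np.exp(-k_min * t)

# Independent check: forward-Euler integration of the same ODE
dt, T = 1e-3, 50.0
R = R0
for t in np.arange(0.0, T, dt):
    C = C0 * np.exp(-t / tau_ca)
    k_recov = k_min + dk * C / (C + K_r)
    R += dt * k_recov * (1.0 - R)

err = abs(R - R_closed(T))
print(err)  # small discretization error, shrinking with dt
```

The agreement between the integrated and closed-form values confirms that the exponent must be dimensionless for the formula to solve the ODE.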
We are interested in capturing the peak value of the IPSC (or Pr), so we construct a discrete dynamical system (or "map") that describes \(P_{\mathrm{rel}} R_{\mathrm{rel}}\) upon repetitive stimulation. Given an interspike interval of T, the calcium concentration after a stimulus is \(C(T) + \Delta\), and the peak IPSC is proportional to \(P_{\mathrm{rel}}(T) R_{\mathrm{rel}}(T)\), which depends upon C. After the release, \(R_{\mathrm{rel}}\) is reduced to the fraction of synapses that fired, i.e. \(R_{\mathrm{rel}} \rightarrow R_{\mathrm{rel}} - P_{\mathrm{rel}} R_{\mathrm{rel}}= R_{\mathrm{rel}}(1-P_{\mathrm{rel}})\). This value is used as the initial condition in the solution to the ODE for \(R_{\mathrm{rel}}(t)\). A two-dimensional map (in C and R) from one peak value to the next is thus constructed. To simplify the formulas we let \(P=P_{\mathrm{rel}}\) and \(R=R_{\mathrm{rel}}\). The map is
\[
\begin{aligned}
C_{n+1}&=C_{n}e^{-T/\tau_{\mathrm{ca}}}+\Delta,\\
R_{n+1}&=1-\bigl(1-R_{n}(1-P_{n})\bigr)\biggl(\frac{C_{n}e^{-T/\tau_{\mathrm{ca}}}+K_{r}}{K_{r}+C_{n}}\biggr)^{\Delta k\,\tau_{\mathrm{ca}}}e^{-k_{\mathrm{min}}T},\\
P_{n+1}&=P_{\mathrm{max}}\frac{C_{n+1}^{4}}{C_{n+1}^{4}+K^{4}}.
\end{aligned}
\]
The peak value upon the nth stimulus is \(\mathit{Pr}_{n}=R_{n} P_{n}\), where \(R_{n}\) is the value of the reserve pool before the release reduces it to the fraction \((1-P_{n})\).
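A minimal Python sketch of iterating this map under Poisson stimulation might look as follows. The parameter values are placeholders chosen for illustration (the fitted values appear in Table 2), and the update order follows the description above: recovery of the depleted pool over the ISI, calcium decay plus spike-evoked influx, release, then depletion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions; the fitted values appear in Table 2)
tau_ca = 10.0                       # calcium decay time constant (ms)
Delta = 1.0                         # calcium influx per spike (control scaling)
P_max, K = 0.85, 0.5                # release-probability Hill parameters
k_min, k_max, K_r = 0.01, 0.1, 0.2  # recovery-rate Hill parameters
dk = k_max - k_min

lam = 0.05                          # Poisson rate in spikes/ms (50 Hz)
isis = rng.exponential(1.0 / lam, size=2**14)

C, R = 0.0, 1.0
Pr = np.empty(len(isis))
for n, T in enumerate(isis):
    # recovery of the (depleted) releasable pool over the interval T
    R = 1.0 - (1.0 - R) * ((C * np.exp(-T / tau_ca) + K_r) / (K_r + C)) \
        ** (dk * tau_ca) * np.exp(-k_min * T)
    # calcium decays over T, then jumps by Delta at the spike
    C = C * np.exp(-T / tau_ca) + Delta
    P = P_max * C**4 / (C**4 + K**4)
    Pr[n] = R * P                   # peak response Pr_n = R_n * P_n
    R *= (1.0 - P)                  # release depletes the ready pool
```

The resulting `Pr` samples are exactly the quantities whose frequency distributions are studied in the sections below.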
Parameter Estimation
The parameter values for the model are summarized in Table 1. The rescaled data presented in a previous section were fitted to the map using the MATLAB routine lsqnonlin and with MCMC techniques [29, 30]. The value of \(P_{\mathrm{max}}\) was determined by variance–mean analysis, and it is 0.85 for the control data and 0.27 for the muscarine data. The common fitted parameter values for both data sets are shown in Table 2.
The control data set was assigned \(\Delta= 1\) which can be done without loss of generality if the concentration of calcium is scaled by that amount, and the muscarine data set has the fitted value of \(\Delta = 0.17\). From this result it is clear that the size of the spike in calcium during a stimulation event must be much reduced to fit the data from the muscarine experiments. This is in accordance with the idea that mAChR activation reduces calcium ion influx at the terminal.
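The fitting procedure can be sketched as a nonlinear least-squares problem. In the sketch below, `scipy.optimize.least_squares` stands in for MATLAB's lsqnonlin, the "data" are synthetic (generated from the model itself with a known Δ), and the remaining parameter values are illustrative assumptions rather than the fitted values of Table 2:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative parameter values (assumptions, not the fitted values of Table 2)
tau_ca, P_max, K = 10.0, 0.85, 0.5
k_min, k_max, K_r = 0.01, 0.1, 0.2
dk = k_max - k_min

def train(Delta, T, n=25):
    """Peak responses Pr_1..Pr_n for a fixed-frequency train, interpulse interval T (ms)."""
    C, R = 0.0, 1.0
    out = np.empty(n)
    for i in range(n):
        # recovery over the interval, then calcium decay plus influx at the pulse
        R = 1.0 - (1.0 - R) * ((C * np.exp(-T / tau_ca) + K_r) / (K_r + C)) \
            ** (dk * tau_ca) * np.exp(-k_min * T)
        C = C * np.exp(-T / tau_ca) + Delta
        P = P_max * C**4 / (C**4 + K**4)
        out[i] = R * P
        R *= (1.0 - P)              # depletion after release
    return out

# Synthetic "data": a 50 Hz depression train generated with a known Delta = 1.0
data = train(1.0, T=20.0)

# Recover Delta by nonlinear least squares (stand-in for lsqnonlin)
fit = least_squares(lambda d: train(d[0], T=20.0) - data,
                    x0=[0.4], bounds=([0.05], [2.0]))
print(fit.x[0])
```

The recovered value of Δ matches the value used to generate the synthetic train, which is the same logic used to infer the reduced Δ from the muscarine data.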
Discussion of the Model
It is common in the experimental literature to classify a synapse as being "depressing" or "facilitating", depending upon its response to a pulse train at some relevant frequency. Simple models have been built that create each effect individually. The model here (developed originally by Lee [16], recall) combines both mechanisms through the introduction of the calcium variable. Depending upon the parameters, facilitation, depression, or a mixture of the two can be represented, as is shown in Lee [16]. Note that facilitation is built into this model through the calcium-dependent P value and rate of recovery.
This interplay of the presynaptic probability of release and the rate of recovery creates a nonlinear filter of an incoming stimulus train. Adding muscarine modifies the properties of this filter. To investigate this idea, we consider the distribution of values of Pr created by exponentially distributed random interspike intervals (ISIs) for varying rate λ, or mean ISI, denoted \(\langle T\rangle=1/\lambda\). Doing so explores the filtering properties of the synapse when presented with a Poisson process spike train. To develop our intuition, we first present results from numerical studies to determine the effect of varying the frequency of the pulse train, or mean rate, on the muscarine and control cases. Then we create analytic expressions for the distribution of calcium and the Pr values, which are corroborated by the numerical results. Finally, the information processing properties of the synapse in the control and muscarine cases at physiological frequencies are compared.
Numerical Study of the Distributions in Pr
To create approximations to the distribution of Pr values we computed \(2^{14}\) samples from the stochastic map, after discarding a brief initial transient. The values, ranging between 0 and 1, were placed into evenly spaced bins. The histograms, normalized to be frequency distributions, were computed for a range of mean frequencies (or rates) in the theta range, the gamma range and higher (nonphysiological, for comparison). The parameters used in the following simulations are from fitting the model to the control and muscarine data sets (see Table 2).
Frequency Distributions of Pr in Control and Muscarine Cases
There is a similar evolution of the frequency distribution for Pr as the rate of the incoming Poisson spike train is increased, in both cases. For very small rates (between almost 0 and 1 Hz) the distribution is peaked near \(P_{\mathrm{max}}\). The peak is necessarily skewed to the left, as it is restricted to the support \([0,1]\) and the exponential distribution of ISIs always contributes some very small intervals, which generate lower Pr values. For very large rates (nonphysiological, 200 Hz and larger) the distribution is peaked near 0, reflecting the fact that the synapse does not have time to recover between spikes. Again the peak is necessarily skewed, this time to the right. In between the two extremes the distribution must spread out between 0 and 1, and it does so in a very particular way.
As the rate is increased from 0.5 to 10 Hz the distribution spreads quickly out over the entire interval; see Fig. 2. This might be expected, since the range of interspike intervals present in the exponential distribution will cause a wide range of Pr responses at these lower mean firing rates. First the skewness to the left becomes more pronounced, then the left half of the distribution grows up to match the right, becoming nearly flat at around 1.8 Hz. It then begins to drop on the right side, becoming almost triangular around 3 Hz. From there the peak on the left sharpens, while maintaining a shoulder for the very lowest values of Pr. Note that this covers the range of rates generally thought to be theta frequency. Here the synapse could be said to be most sensitive, allowing for widely varying responses.
For rates larger than 10 Hz the peak on the left sharpens as the rate is increased, but very slowly. The shoulder on the left of the peak remains in place. In contrast with the theta range, the gamma range is marked by a nearly steady response. Having these two distinct “tunings” of the synapse at physiological frequencies is quite interesting. For even larger frequencies the shoulder on the left grows up to meet the peak (data not shown). The peak near 0.1 persists until the rate is greater than 300 Hz, after which it is subsumed into the peak near zero. The synaptic dynamics thus creates a small probability response that is stable over a wide range of frequencies.
As mentioned previously, for the control case the influx of calcium (Δ) was set to unity, because the calcium variable in the map was rescaled by this amount. In order to fit the muscarine data a much smaller value of Δ was needed (0.17). This is consistent with the hypothesis that muscarine suppresses the influx of calcium ions into the presynaptic terminal. This reduces the size of the response, but it was also shown to reduce the relative amount of depression seen with repeated spikes, at least at intermediate frequencies around gamma. This could have important implications for the effect of neuromodulation at these frequencies. How is this manifested in the Pr distributions? Figure 3 shows the Pr frequency distributions for varying mean rate. The range on the x-axis is set to \([0,0.3]\) because \(P_{\mathrm{max}} = 0.27\). With this rescaling the distributions look much like those in the control case: the distribution is skewed left with a peak near \(P_{\mathrm{max}}\) for low frequencies, and shifts to have a peak in the middle of the range for intermediate frequencies while spreading out. The peak in the middle of the range gradually moves to the left as the frequency is increased further. At low frequencies the control-case peak is more triangular and skewed right than in the muscarine case; at gamma frequencies the control distribution has a shoulder on the left, while the muscarine distribution is more symmetrical. Therefore the muscarine-treated synapse focuses the response in a narrow range around a small Pr for all but the lowest frequencies. At high frequencies the peak of the muscarine distribution does not go to zero as fast as the control distribution does, a somewhat nonintuitive result, though we must point out that this range of frequencies is not physiological.
In both cases the dynamics of the map create a low-pass filter, which is the hallmark of a depressing synapse. The mid-range peak makes it something more complex than a linear low-pass filter, as it has a typical response size (the peak or the mean value) for each frequency. However, it shows an interesting uniformity: the peak (or most likely) response is fairly insensitive to changes in frequency over the physiological range of 3–70 Hz.
We can then compare the mean and the peak of the distributions as the input frequency is varied; see Fig. 4. As expected, the mean and the peak of the muscarine distributions are much lower than those of the control, except in the case of the peak within the theta range. This could mean that even under depression caused by pharmaceutical applications the synapse response remains consistent, a kind of built-in stability at theta frequencies. We also note that the amount of depression relative to the initial response of the synapse is less in the muscarine case than in the control: the mean ranges from over 0.8 down to between 0.1 and 0.2 for the control case, a change of about 0.6–0.7 overall, and from 0.3 to near 0.1 in the muscarine case, a much smaller change of about 0.2. This confirms the assertion in [1].
Entropy of Pr Distribution vs. Mean Rate
The entropy of a distribution is a convenient scalar measure for comparing the overall structure in distributions as a parameter is varied. We therefore compute the entropy of the Pr distributions in the control and muscarine case for varying mean rate and compare it with the distributions themselves. Finally, we draw some physiological conclusions.
The entropy of a distribution, \(p(x)\), of a random variable, X, is defined to be
\[
H(X)=-\sum_{x}p(x)\log_{2}p(x).
\]
It measures the number of states the variable can assume, along with the probability of each state. Note that for continuous random variables it is necessarily dependent upon the exact partition of the distribution into a histogram of values that can be summed. In what follows we keep this partition constant across different cases, comparing the entropies of the resulting histograms. All other things being equal, the entropy of a distribution with more “spread” is higher than one that is more focused on a single value. If a random variable is tightly constrained, its distribution will have a lower entropy, i.e. it is much more certain what state the variable will be in for any given sample.
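Concretely, the entropy of a binned distribution on a fixed partition can be computed as follows. This is a minimal sketch; the bin count and the two test distributions are arbitrary choices for illustration, not the partition used in the paper:

```python
import numpy as np

def entropy_bits(samples, bins=50, lo=0.0, hi=1.0):
    """Shannon entropy (in bits) of a histogram built on a fixed partition of [lo, hi]."""
    counts, _ = np.histogram(samples, bins=bins, range=(lo, hi))
    p = counts / counts.sum()
    p = p[p > 0]                    # use the convention 0*log(0) = 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
spread = rng.uniform(0.0, 1.0, 10_000)    # spread over the whole interval
focused = rng.normal(0.5, 0.02, 10_000)   # tightly focused around 0.5
print(entropy_bits(spread))               # close to the maximum log2(50) ≈ 5.64
print(entropy_bits(focused))              # much smaller
```

Keeping the partition fixed across cases, as the text requires, makes the entropies of different Pr distributions directly comparable.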
In Fig. 5 we plot the entropy as a function of the mean rate of the driving exponential distribution for both control and muscarine parameter sets. Figure 5(b) shows a range of frequencies from near zero to 1000 Hz to see the limit for large frequencies. Figure 5(a) shows a zoom in on the physiological range of frequencies between zero and 100 Hz.
Studying the entropy vs. frequency from 0.1 to 100 Hz, we see a local maximum at low frequencies (at approximately 4 Hz). In this frequency range the distribution spreads out between a peak at high Pr values and a peak at lower Pr values. At these frequencies there would be a large variability in the size of the response, which could be a useful characteristic in a communication channel. However, if the criterion is a stable level of connectivity, lower entropy is desirable. For larger (nonphysiological) frequencies the entropy increases again to a local maximum near 260 Hz, after which it decays as the distribution narrows around near-zero Pr values. For the muscarine case the second local maximum is less pronounced and occurs near 388 Hz. The maximum in both cases is the result of the distribution shifting from one peaked at a nonzero Pr value to one peaked at zero and skewed to the right. This occurs through the widening of the peak, causing the increase in entropy.
Note that the size of response (Pr) is a measure of the “strength” of the synaptic connection created by the “pool” of synapses. The advantage of having a peaked distribution with low entropy over a range of frequencies is that a stable connection is created, even when presented with a stochastic signal of the Poisson type. Alternatively, when the distribution is switching from a peak at high Pr values to low, as the mean rate is increased, the entropy is at a maximum and there is a greater range of coupling strengths. In that case the exact value of the strength of the connection will depend on the past synaptic signaling history.
Stochastic Recurrence Relationships for Calcium and Pr
The map for C, P, and R driven by a Poisson spike train is a stochastic recurrence relation with noise entering through an exponent. We shall show that while a closed form for the complete map is not possible, an approximation can be constructed that relies on the properties of the deterministic map at low input rates. Only the map for the calcium concentration admits direct analysis, and it requires introducing a random variable for Δ. For completeness, we outline the derivation below.
Calcium Concentration
Suppose independent, identically distributed increases in the amount of calcium \(\{\Delta_{n}\}_{n\geq1}\) occur at times \(\{t_{n}\}_{n\geq1}\); we seek the distribution of the calcium concentration that accumulates in the presynaptic terminal under these assumptions.
The concentration is governed by a random recurrence equation given by
where \(A_{n}= e^{-T_{n}/\tau_{\mathrm{ca}}}\), and the waiting times \(T_{1}=t_{1}\), \(T_{n} = t_{n}-t_{n-1}\), \(n\geq2\), are independent and identically distributed (i.i.d.), making \(\{T_{n}\}\) a renewal process. Moreover, the \((A_{n},\Delta_{n} )\) are assumed to be i.i.d. vectors with initial condition \(C(0)=C_{0}\), where \(C_{0}\) is a base concentration of calcium, taken to be zero in the absence of a stimulus. After some manipulation it can be shown that the concentration of calcium follows a gamma distribution with shape parameter \(\lambda\tau _{\mathrm{ca}}+1\) and scale parameter 1. Thus,
where c is a realization of the random variable C. The distribution of the calcium concentration C has mean \(\lambda\tau_{\mathrm{ca}}+1\). The coefficient of variation of C is \(1/\sqrt{\lambda\tau_{\mathrm{ca}}+1}\), and hence the shape of the distribution depends on the value of \(\lambda\tau _{\mathrm{ca}}+1\). The details of the derivation can be found in Appendix B.
We compare this result with the distribution obtained by numerical simulation of the recurrence relation for C by creating quantile plots. Figure 6 displays quantile plots for the map with input frequencies 0.1, 6, 7, 25, 50, 100 Hz, with the theoretical quantiles based upon the gamma distribution. This type of graphical display shows whether the simulated data can reasonably be described by a gamma distribution. Plots show adherence to a linear relationship between the observed and theoretical quantiles, confirming our analytic results.
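The gamma result can also be checked by iterating the recurrence directly. The sketch below assumes unit-mean, exponentially distributed increments \(\Delta_{n}\) (an illustrative choice consistent with the i.i.d. assumption; the actual increment distribution is specified in Appendix B) and compares sample moments against \(\Gamma(\lambda\tau_{\mathrm{ca}}+1, 1)\):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, tau = 0.05, 20.0          # rate (1/ms) and calcium decay time (ms); illustrative
n_spikes = 200_000

T = rng.exponential(1 / lam, n_spikes)      # Poisson ISIs
delta = rng.exponential(1.0, n_spikes)      # assumed unit-mean calcium increments
C = np.empty(n_spikes)
c = 0.0
for i in range(n_spikes):
    c = c * np.exp(-T[i] / tau) + delta[i]  # decay over the ISI, then jump
    C[i] = c
C = C[1000:]                                # discard the transient

shape = lam * tau + 1                       # predicted Gamma shape (scale 1)
print(C.mean(), C.var())                    # both near lam*tau + 1 = 2
```

For a Gamma(shape, 1) distribution the mean and variance both equal the shape parameter, which is what the sample moments show.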
Distribution of Pr for Small \(\tau_{\mathrm{ca}}\) and Large T
We conclude from the previous subsection that the random variable describing the calcium concentration does have a parametric distribution. However, this is not the case for the variable R due to the complexity of the map, and so a closed form for the distribution of \(\mathit{Pr} = P R\) is not possible. However, we can understand it partially by considering the mechanisms involved. We can also use some information from the deterministic map. The map has a single attracting fixed point, and the collapse to this fixed point from physiological initial conditions is very rapid [15]. The value of the fixed point depends on the frequency, with a smaller value for larger frequency in general. In Fig. 7 we plot the expression for the fixed point of the deterministic map vs. rate, along with the mean of the distribution of Pr for varying frequencies. The values decrease with increasing frequency, as expected, and are remarkably close.
This motivates the idea that if the Pr value were directly determined by the fixed point value for the ISI preceding it, we would be able to convert the distribution of the ISIs into that of the Prs by using composition rules for distributions of random variables. We examine this when the calcium decay time (\(\tau_{\mathrm{ca}}\)) is notably smaller than the interspike interval (T). In this regime C and P have time between pulses to decay to their steady-state values before the next pulse arrives. This means that the fixed point value for a rate given by \(1/T\), where T is the preceding interspike interval, is likely to give a good estimate of the actual value of \(\mathit{Pr} = P R\).
It was shown in [15] that in this case \(\overline {C}\rightarrow{\Delta}\) as T increases and hence \(\overline {P}\rightarrow{P_{\mathrm{max}}}\). Therefore, the fixed point R̅ is then
From this we can compute the probability density function of R̅, given an Exponential distribution of the variable T. Details of this calculation are provided in Appendix C. For ease of notation we let \(X = \overline{R}\) and \(Y=\overline{PR}\) be random variables, then an analytic expression for probability density function (PDF) of X exists and is given by
where \(a=1-P_{\mathrm{max}}\), \(\lambda>0\) is the mean Poisson rate and \(k_{\mathrm{min}}>0\) is the baseline recovery rate. The distribution is supported on the interval \([0,1]\).
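This density can be cross-checked by Monte Carlo sampling. The sketch below assumes the recovery-map fixed point takes the Dittman-style form \(\overline{R}(T) = (1-e^{-k_{\mathrm{min}}T})/(1-a e^{-k_{\mathrm{min}}T})\) with \(a = 1-P_{\mathrm{max}}\); the parameter values are illustrative, not the fitted ones:

```python
import numpy as np

def rbar(T, k_min=0.01, P_max=0.6):
    """Assumed fixed point of the recovery map for a preceding ISI T (ms)."""
    a = 1.0 - P_max
    e = np.exp(-k_min * T)
    return (1.0 - e) / (1.0 - a * e)

rng = np.random.default_rng(2)
X_theta = rbar(rng.exponential(1 / 0.005, 100_000))  # 5 Hz mean input rate
X_gamma = rbar(rng.exponential(1 / 0.05, 100_000))   # 50 Hz mean input rate
print(X_theta.mean(), X_gamma.mean())
```

Samples stay on \([0,1]\), and the mean of the stochastic fixed point decreases as the input rate increases, consistent with faster inputs depleting the releasable pool.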
From Eq. (3), the mean or expected value of random variable \(X=\overline{R}\) is given by
where \({}_{2}F_{1} (1,\frac{k_{\mathrm{min}}+\lambda}{k_{\mathrm{min}}};2+\frac{\lambda }{k_{\mathrm{min}}};a )\) is the hypergeometric function.
Similarly, we can compute the analytic expression for the probability density function of the fixed point \(Y=\overline{PR}\). We will refer to this in what follows as the stochastic fixed point. The probability density function for the stochastic fixed point is
This distribution is supported on the interval \([0,P_{\mathrm{max}}]\). Figure 8 shows this expression for different mean input interspike intervals, in ms.
In Fig. 9 are histograms of Pr values obtained from the map with very small \(\tau_{\mathrm{ca}}\), and other parameters from the control set, as in Fig. 8. The similarity between the two is evident. Apparently this approximation captures not only the mean value of the numerical distribution, but also the shape of the distribution and how it changes with varying input spike train rate.
Figure 10 compares these two distributions via quantile–quantile plots, for mean ISIs of 10, 50, 100, 120, 330, and 2000 ms. In each QQ-plot the quantiles of the \(\overline {PR}\) values are plotted against the quantiles of the Pr values. If the two data sets have the same distribution, the points in the plot should form a straight line. From these plots it is clear that when the mean ISI is significantly larger than the calcium decay time, the distribution of the stochastic fixed point is similar to that of Pr. However, for mean ISIs below 10 ms the approximation becomes less exact, as does the similarity between the two distributions.
The Stochastic Model of the Postsynaptic Response
To compute mutual information between the stimulus and the response of a synapse we must capture the variability in the postsynaptic response. A certain probability of release will generate a distribution of responses that has an analytical description given certain fundamental assumptions about stochastic processes. We provide a short overview of these here.
Model Description
Noise is a fundamental constraint on information transmission, and such variability is inherent in individual neurons in the nervous system. To account for the variability across identical stimulation trials, we use a stochastic model of the synapse. Here, we follow the Katz model of neurotransmitter release; see [31] and [9].
Upon the arrival of an action potential at a presynaptic terminal, calcium influx through calcium channels causes some of the synaptic vesicles to fuse to the terminal bouton membrane at special release sites and neurotransmitter diffuses across the synaptic cleft. Each site can release either one or zero vesicles and we assume there are N release sites. Each release event is independent, and we assume that release of K vesicles from N release sites follows a binomial distribution with two parameters \((N, \mathit{Pr})\) and is written
Pr is the release probability for each release site following an action potential obtained from the map. Note that failure of release results in a zero amplitude response from the postsynaptic neuron, so it cannot be informative. In addition, there is not only variability in the number of sites activated and hence the number of vesicles actually released, but also in the postsynaptic response to a single vesicle, due to inherent stochasticity in the postsynaptic receptors. Therefore, we assume that the size of the postsynaptic response (\(\mathcal{S}_{\mathrm{resp}}\)) at the time of spike is not a constant, but instead it follows a normal distribution with mean μ and variance \(\sigma^{2}\), with a twosided truncation which is written
Therefore, the amplitude of postsynaptic response following each action potential is obtained by combining the binomial model of vesicle release with the Normal model of a single response. The summation of responses evoked by each release can be written
Note that, for \(K=k>0\), \(\mathcal{S}\sim N(k\mu,\sqrt{k}\sigma)\). The probability density of the postsynaptic response to the release of a single vesicle is thus
where
We will use this formulation in what follows.
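A sketch of this compound model follows. The parameter values (N, μ, σ) are illustrative, and the two-sided truncation bounds of the single-vesicle amplitude are not reproduced here, so a clip to \([0, 2\mu]\) stands in:

```python
import numpy as np

def psr_sample(Pr, n_trials, N=5, mu=1.0, sigma=0.1, seed=3):
    """Sample postsynaptic responses: K ~ Binomial(N, Pr) vesicles released,
    each contributing a (crudely truncated) normal amplitude; failures give 0."""
    rng = np.random.default_rng(seed)
    K = rng.binomial(N, Pr, n_trials)
    out = np.zeros(n_trials)
    for i, k in enumerate(K):
        if k > 0:
            amps = rng.normal(mu, sigma, k).clip(0.0, 2.0 * mu)  # stand-in truncation
            out[i] = amps.sum()
    return out

S = psr_sample(0.4, 100_000)
print(S.mean())   # near N * Pr * mu = 2.0 when the truncation rarely binds
```

Failure of release (K = 0) yields a zero-amplitude response, matching the observation above that failures carry no information about the stimulus.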
Mutual Information Calculations
In this section we apply information-theoretic measures introduced by Shannon [32] to quantify the ability of synapses to transmit information. One important concept in information theory is mutual information: the expected reduction in uncertainty about x that results from learning y, or vice versa. This quantity can be formulated
where the entropy was defined earlier and
is the joint entropy of the two random variables X and Y, which quantifies the uncertainty of their joint distribution. Using Eqs. (1) and (6), the mutual information can be rewritten
The mutual information is symmetric in the variables X and Y, \(I(X;Y)=I(Y;X)\), and is zero if the random variables are independent. Note that if the relation between them is deterministic, it is equal to the entropy of either variable. In order to compute this from data one is faced with the basic statistical problem of estimating the entropies: selecting the correct bin width in the histograms of the random variables. Here, we use the Freedman–Diaconis rule [33] for finding the optimal number of bins. According to this rule, the optimal number of bins can be calculated from the interquartile range (\(\mathit{Iqr} = Q_{3}-Q_{1}\)) and the number of data points n. It is defined as
One of the advantages of the Freedman–Diaconis rule is that the variability in the data is taken into account using Iqr, which is resistant to outliers in the data.
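A minimal sketch of the Freedman–Diaconis binning (bin width \(2\,\mathit{Iqr}\,n^{-1/3}\)) and the resulting histogram estimate of mutual information, on synthetic data (helper names are ours):

```python
import numpy as np

def fd_bin_count(x):
    """Freedman-Diaconis rule: bin width 2*Iqr*n**(-1/3), so the
    number of bins is the data range divided by that width."""
    q1, q3 = np.percentile(x, [25, 75])
    width = 2.0 * (q3 - q1) * len(x) ** (-1.0 / 3.0)
    return max(1, int(np.ceil((x.max() - x.min()) / width)))

def mutual_information_bits(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=(fd_bin_count(x), fd_bin_count(y)))
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 50_000)
print(mutual_information_bits(x, rng.uniform(0, 1, 50_000)))  # near 0: independent
print(mutual_information_bits(x, x))                          # near H(X): deterministic
```

The two limiting cases above mirror the properties stated in the text: independence gives zero mutual information (up to a small finite-sample bias), and a deterministic relation gives the full entropy of the variable.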
We stimulated synapses with input Poisson spike trains with different mean frequencies and computed the mutual information between the postsynaptic response and the preceding interspike interval. The mutual information then is obtained by
Figure 11 shows the result of this calculation for the control and muscarine cases. In both we observe a rapid decline in mutual information at frequencies above 7 Hz. The peak mutual information occurs between 0.1 and 2 Hz; together these features demonstrate the frequency dependence of temporal information encoding. Large interspike intervals (low frequency) allow enough time for recovery of all release sites, leading to a tight distribution about \(\mathit{Pr}=P_{\mathrm{max}}\) and very little communication of information. Very short interspike intervals give no time for recovery and Pr is confined to a small region near 0, also leading to low information transmission. Between these extremes, variable interspike intervals have a significant effect on the amplitude of Pr. The main difference between the muscarine and control cases is the absolute value of the mutual information, which is significantly lower in the muscarine case. This can be understood by considering that the range of Pr values in the muscarine case is much smaller, so variations in Pr are less distinguishable and contribute less to the mutual information.
This coincides with results in [9], where depressing synapses are found to have a peak mutual information at very low frequencies. However, here we also see this in the distributions themselves, where the amount of variability is represented by the overall width or entropy of the distribution. It is not surprising that the mutual information will reflect this, especially considering that these entropies are the foundation of the calculation.
Information “Stored” in the Postsynaptic Response
We showed that a distribution of postsynaptic responses (PSRs) carries information about the distribution of the ISIs (interspike intervals). However, the exact sequence of ISIs determines a given postsynaptic response. The length of the sequence that affects a response will depend on the parameters of the model, in that the calcium can accumulate in certain situations. We call this a kind of “memory” for an exact sequence of ISIs. We measure this by computing the mutual information between a prior sequence and the PSR. This can be done in a coarse-grained way by summing the N previous ISIs and using that sum as a random variable, which we consider first. This, however, ignores the subtleties of the exact sequence; for instance, a long ISI followed by a short ISI will give a PSR that is smaller than the reverse, given that the sum of the two is the same. A more complete approach is to compute the mutual information between a PSR and an N-tuple of preceding spikes, in order. The former method has been used in previous work, so we begin with that approach to demonstrate some of these subtleties.
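The order dependence noted above (long-then-short vs. short-then-long, with the same total elapsed time) can be checked with a minimal depression-only map. The map below is an illustrative stand-in for the full FD model, with release fraction p and recovery rate k assumed, not fitted:

```python
import numpy as np

def pr_after(isis, p=0.6, k=0.02):
    """Release probability at the spike following the given ISI sequence,
    for a depression-only map: release depletes R -> R*(1-p), then R
    recovers toward 1 between spikes as R -> 1 - (1 - R)*exp(-k*T)."""
    R = 1.0
    for T in isis:
        R *= (1.0 - p)                         # depletion at the spike
        R = 1.0 - (1.0 - R) * np.exp(-k * T)   # recovery over the ISI
    return p * R

long_short = pr_after([100.0, 10.0])   # long ISI then short ISI
short_long = pr_after([10.0, 100.0])   # same total elapsed time, reversed
print(long_short, short_long)
```

The long-then-short ordering ends on a short recovery window and so yields the smaller response, which is exactly the structure a sum of ISIs cannot distinguish.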
MI Between PSR and Sum of Preceding ISIs
In [9], the mutual information is computed between the postsynaptic responses and a sum of the preceding ISIs, increasing the number of terms in the sum to go further back in time. Assuming that the first spike occurs at \(t_{0}=0\) ms, this can be formulated as follows. Let \(t_{1}=T_{1}\), \(t_{2}=T_{1}+T_{2}\), … , \(t_{k}=T_{1}+\cdots+T_{k}\) be a vector of the sums of the preceding ISIs. This can be written compactly as
where \(M>0\) is a natural number. The mutual information between the postsynaptic response and the sum of preceding spike ISIs is then
Figures 12 and 13 illustrate the results for both the control and the muscarine cases, at 5 and 50 Hz, respectively. The information contained in the sum is significantly lower for muscarine compared to the control case, which is not surprising, given the results from the preceding section. For both firing rates, 5 and 50 Hz, in the control case the MI decreases as more ISIs are added to the sum. The further back in time the sum goes, the less the ISIs in the sum are directly involved in determining the PSR. The MI curve becomes almost flat past five preceding ISIs for 50 Hz, indicating an effect that can be measured only that far back. In the control 5 Hz case the decay begins to level off at \(N=5\) as well, but continues to decline for \(N>5\). This shows a frequency dependence of the measure that makes sense mechanistically. The muscarine MI is much less dependent on the cumulative history of spikes, showing very little change as N is increased. This also makes sense mechanistically, because in the muscarine case the response range is very narrow, and cannot carry much information forward from one preceding ISI, let alone any ISIs preceding that.
Mutual Information (MI) Between Postsynaptic Response \(\mathcal{S}\) and n-Tuple Interspike Intervals \(T_{1},T_{2},\ldots,T_{n}\)
Our second method is much more computationally intense, but preserves the exact structure of the sequence of preceding ISIs.
Consider a singleinput and singleoutput channel with interspike input T and postsynaptic response output \(\mathcal{S}\). The mutual information between T and \(\mathcal{S}\) in this notation is defined
Consider now a channel with two interspike interval inputs \(T_{1}\) and \(T_{2}\) and a single output \(\mathcal{S}\). The mutual information between the inputs and the output of a two-way channel is commonly known as the 2-tuple information and can be defined as the amount of reduction of uncertainty in \(\mathcal{S}\) from knowing \((T_{1},T_{2})\). The 2-tuple mutual information is written
Following McGill [34] we can extend the definition for mutual information to include more than two sources \(T_{1}, T_{2},\ldots ,T_{n}\) that transmit to \(\mathcal{S}\), arriving at
The histogram construction of probability density functions in higher dimensions suffers from the “curse of dimensionality” [35]. Kozachenko and Leonenko [36] instead proposed an improved nonparametric estimator for entropy: the KL estimator based on knearest neighbor distances. In [37], Kraskov et al. reported on a modified KL estimator to compute mutual information with improved performance and applicability in higher dimensional spaces (the result is referred to as the KSG algorithm). Compared to other methods for multivariate data sets, estimators obtained by the KSG algorithm have minimal bias when applied to data sets of limited size, as is common in real world problems.
The KL estimator demonstrates that it is not necessary to estimate the probability density function in order to compute information theoretic functionals. Instead, the estimator is based on the metric properties of nearest neighbor balls in both the joint and the marginal spaces. The general form of the KL estimator for the differential entropy is written as
where \(\psi:\mathbb{N}\rightarrow\mathbb{R}\) denotes the digamma function, \(\epsilon(i)\) is twice the distance from point \(x_{i}\) to its kth neighbor, d is the dimension of x, and \(c_{d}\) is the volume of the d-dimensional unit ball. Similarly, the KL estimator for the joint entropy is written as
Therefore, for any fixed k, the mutual information can be computed by subtracting the joint entropy estimate from the sum of \(H(X)\) and \(H(Y)\). However, this introduces biases due to the different distance scales in the different dimensions. The KSG algorithm corrects this by instituting a random choice of the nearest neighbor parameter k. For a more detailed derivation see [37].
We now employ this estimator for the mutual information between the probability of release (Pr) as a single output and the preceding ISIs as a multivariate input. Note that we are able to use the deterministic Pr distributions in these calculations because Pr is not directly dependent upon the n-tuple of the preceding ISIs.
Figure 14 shows the increase in mutual information (or reduction in uncertainty) between Pr and an n-tuple of ISIs, with increasing n. Mean rates in the theta and gamma ranges, 5 and 50 Hz, respectively, are plotted for the control and muscarine cases. At 5 Hz in the control case the mutual information increases from 1 to 2 preceding ISIs, after which it decreases, tending toward some limiting value. Because the MI for three preceding ISIs is still greater than for one, we could say that this synapse “remembers” up to three preceding ISIs. For the muscarine case the increase happens over the first four preceding ISIs, meaning the uncertainty is reduced by adding in previous ISIs, and in this parameter regime the synapse “remembers” somewhat further back in the ISI train than in the control case. The mutual information is even greater in muscarine than control for more than 4 ISIs at 5 Hz. This is not at all obvious from the other results we have presented in this paper, and could not be uncovered without these statistics on sequences of ISIs. At 50 Hz the muscarine MI is always smaller than the control MI, but we see almost the same history dependence for each synapse, independent of frequency.
Discussion and Conclusion
When presented with a Poisson spike train input, our depressing synapse model acts as a nonlinear filter of the exponential distribution of interspike intervals. For high and low input frequencies the result is as expected: the output Pr distribution is peaked at values near zero and \(P_{\mathrm{max}}\), respectively, with very small variance (see Fig. 2). In between, for frequencies near theta, the distribution is spread across the entire interval between zero and \(P_{\mathrm{max}}\). This creates a peak in the entropy of the distribution near these frequencies, which demonstrates a wide dynamic range in the response of the synapse (Fig. 5). The range in the control case is necessarily larger than in muscarine, because \(P_{\mathrm{max}}\) is larger.
Over the gamma frequency range we see that the peak and mean of the distribution remain more or less constant, indicating a stable response size when presented with a Poisson spike train. This would create a stable connection and is the advantage of having a peaked distribution with low entropy. The addition of muscarine reduces the strength of this connection.
These are the results of a numerical investigation of the properties of the synapse. Creating closed form expressions for these distributions is not possible; the map is too complex. However, the stochastic recurrence relation for the calcium concentration alone is simple enough to be analyzed, and we discovered that it follows a gamma distribution with shape parameter \(\lambda\tau_{\mathrm{ca}} +1\). For the Pr distribution we had to rely on an approximation motivated by numerical results. We found the mean of the distribution followed the frequency in the same way that the fixed point of the map did. The collapse to the fixed point is very rapid, one or two iterations at most, which justifies our assumption that the Pr values are determined by the fixed point value associated with the instantaneous rate set by the preceding interspike interval, i.e. \(\lambda= 1/T\). Then the formula for the fixed point as a function of frequency could be used to generate a distribution, using the exponential distribution of the Ts. The results confirm the validity of this approximation, even in ranges outside the other approximations necessary to arrive at a closed form. Most importantly, the “sloshing” of the distribution between zero and \(P_{\mathrm{max}}\) as the frequency is decreased through the physiological range is captured by this form.
For a given train of interspike intervals the map for \(\mathit{Pr} = P R\) is completely deterministic; it captures the mean of the response behavior. Therefore the mutual information between the Pr distribution and the ISI distribution would simply equal the entropy of Pr. In order to introduce stochasticity to the response we modeled the noise in the release of vesicles of neurotransmitter as a binomial process, and the noisy postsynaptic response to the vesicles with a truncated Normal distribution. With a distribution of the resulting PSR values, we calculated mutual information. The mutual information as a function of firing rate was found to have a peak around 3 Hz for both the muscarine and the control cases, though the overall mutual information was much lower for muscarine, in part because of the lower entropy of the Pr distribution in this case (Fig. 11). The peak in the mutual information in the control case is 6 times the baseline value for large firing rates. For the muscarine case it was only about 1.25 times larger. Therefore the synapses with muscarine added as a neuromodulator are much less sensitive to changes in frequency overall. This can be viewed as beneficial for the stability of the response, but it creates a much less dynamic filter than in the control case.
For comparison, we note that in [38] the authors determine that changes in response amplitude due to presynaptic short-term plasticity are balanced out by postsynaptic conductance noise at higher firing rates. Interestingly, at lower rates, the effects of short-term plasticity are more evident. For our experimentally fit synaptic model, maximum entropy and maximum mutual information occur at very low frequencies, around 3–5 Hz. This means that these effects would not be washed out by fluctuations in membrane conductance on the postsynaptic side. Additionally, Rotman et al. [39], in a rate-coding type study, conclude that depressing interneuron synapses in hippocampus possess optimal information transfer characteristics at lower frequencies, indeed, single spikes. Finally, it was found in [40] that models that include presynaptic depression and stochastic variation in vesicle release have lower information transfer at low frequencies than at high frequencies, while their model without stochasticity showed no such frequency dependence. Clearly the effects of short-term plasticity on information transfer in these different studies need to be compared very carefully to arrive at definitive conclusions.
We then took these calculations a step further by attempting to determine how far back in a spike train the synapse “remembers”, in that information from previous ISIs is carried forward into a certain measurement of the Prs. This is a nontrivial problem, with technical challenges in the calculations. We performed a mutual information calculation with a sum of the previous ISIs, successively adding more intervals. This sum collapses much of the information in the sequence, measuring only whether it was overall a long time or a short time, compared to other samples. Some results are obtainable, however, and we saw an overall decline in mutual information as more ISIs were added to the sum, at least in the control case (Figs. 12 and 13). In the muscarine case the mutual information is seemingly insensitive to adding in more ISIs, being relatively flat. The absolute value of the mutual information was much less in both cases for 50 Hz than for 5 Hz. Again, the larger entropy of the Pr distribution at 5 Hz creates the potential for larger mutual information. We did see that the MI leveled out more slowly at 5 Hz than at 50 Hz for the control case, indicating a longer “memory” at 5 Hz than at 50 Hz.
In an attempt to keep all the structure in a preceding interspike interval sequence we used a multivariate version of the mutual information calculation. Because of the high dimensionality of the data this is more challenging. A k-nearest neighbor approach to estimating the entropy is appropriate in this case, and we used the KSG algorithm to do the calculation. This measures the mutual information between the response and the sequence of ISIs as the number in the sequence is increased. If it increases with more ISIs in the sequence, we can assume that the reduction of uncertainty is greater as longer histories are considered. It is seen to increase initially and then decrease with history length as the memory effect is washed out (Fig. 14). Somewhat nonintuitively, the muscarine case increases over longer histories than the control, though the overall MI is again smaller. Adding up to four ISIs to the sequence improves the prediction of Pr, versus two to three ISIs in control. This result is more or less the same at gamma and theta frequencies. Using the sum of the ISIs in the mutual information calculation hides this dependence in the muscarine case, as anticipated.
Through this analysis we have gained insight into the information processing characteristics of PV BCs. Within a physiological context, PV BCs receive synaptic input from intrinsic and extrinsic sources, effectively tuning them to fire specifically at theta frequency [41]. It is clear from our analysis that firing of PV BCs at theta frequency optimizes the information content of PV BC-to-pyramidal cell synaptic transmission. Moreover, PV BCs in vivo often fire bursts of more than one spike per theta cycle [41]. This short “burst” of more than one action potential in vivo serves to further optimize the information content, providing a stronger “memory” of PV BC activity.
Cholinergic neuromodulation of PV BCs occurs both pre- and postsynaptically ([1, 42]) and is associated with the generation of population-level gamma rhythms. By reducing Pr and protecting the synapse from vesicular depletion, presynaptic neuromodulation, we now show, reduces entropy and mutual information. While this effect may reduce the information content of the synapse, it will promote stability and regularity of the synaptic response, which might be advantageous in the generation of population-level gamma rhythms. Therefore, we hypothesize two modes for PV BC synapses. In the absence of cholinergic neuromodulation, PV BCs are optimally tuned to transfer information at theta frequency, which might be advantageous in encoding the onset of salient stimuli [43]. In the presence of cholinergic neuromodulation, when PV BCs may become depolarized [42] and are more likely to fire at gamma frequency, the information processing capability is reduced and synapse stability is gained, which would promote population-level network synchronization. Future studies that employ in vivo spike trains could corroborate these hypotheses.
We are continuing our quest to quantify the information processing properties of synapses using techniques beyond mutual information. Computational mechanics ([44, 45]) gives us a theory and a basis for understanding time series from a process as a hidden Markov model with multiple states. How the model changes as input frequency is varied, or neuromodulation is applied, would give us an even better categorization of the synaptic processes. Furthermore, these measures can be applied to output processes alone, without any information about the input stimuli, allowing a much more robust classification of signals measured from synapses in vivo or in vitro.
Understanding the interaction of the synaptic dynamics with voltage oscillations in the hippocampus is the next step in connecting the dots between synaptic plasticity and its effect on the mechanisms involved in learning and memory. We have preliminary results in a simple deterministic model and will be extending these to stochastic models of microcircuit rhythms. We believe that a careful piecing together of the model and behavior will guide our intuition much more than a large scale numerical approach, and it will more easily interface with electrophysiological experiments on actual neurons in the hippocampus.
Abbreviations
BC: basket cell
CA1: Cornu Ammonis, earlier name for hippocampus
FD: facilitation and depression
IPSC: inhibitory postsynaptic current
ISI: interspike interval
KL: Kozachenko and Leonenko
KSG: Kraskov, Stögbauer, and Grassberger
mAChR: muscarinic acetylcholine receptor
MCMC: Markov chain Monte Carlo
MI: mutual information
NT: neurotransmitter
PSR: postsynaptic response
PV: parvalbumin-positive
References
Lawrence JJ, Haario H, Stone EF. Presynaptic cholinergic neuromodulation alters the temporal dynamics of short-term depression at parvalbumin-positive basket cell synapses from juvenile CA1 mouse hippocampus. J Neurophysiol. 2015;113(7):2408–19.
Abbott LF, Regehr WG. Synaptic computation. Nature. 2004;431:796–803.
Anwar H, Li X, Bucher D, Nadim F. Functional roles of short-term synaptic plasticity with an emphasis on inhibition. Curr Opin Neurobiol. 2017;43(Supplement C):71–8. Neurobiology of Learning and Plasticity.
Vogels T, Froemke R, Doyon N, Gilson M, Haas J, Liu R, Maffei A, Miller P, Wierenga C, Woodin M, Zenke F, Sprekeler H. Inhibitory synaptic plasticity: spike timing-dependence and putative network function. Front Neural Circuits. 2013;7:119.
Destexhe A, Marder E. Plasticity in single neuron and circuit computations. Nature. 2004;431:789–95.
Abbott LF, Varela JA, Sen K, Nelson SB. Synaptic depression and cortical gain control. Science. 1997;275(5297):221–4.
Fauth M, Wörgötter F, Tetzlaff C. The formation of multisynaptic connections by the interaction of synaptic and structural plasticity and their functional consequences. PLoS Comput Biol. 2015;11(1):e1004031.
Cartling B. Control of neural information transmission by synaptic dynamics. J Theor Biol. 2002;214(2):275–92.
Fuhrmann G, Segev I, Markram H, Tsodyks M. Coding of temporal information by activity-dependent synapses. J Neurophysiol. 2002;87(1):140–8.
Yang Z, Hennig MH, Postlethwaite M, Forsythe ID, Graham BP. Wideband information transmission at the calyx of held. Neural Comput. 2009;21(4):991–1017. PMID: 19018705.
Tsodyks M, Pawelzik K, Markram H. Neural networks with dynamic synapses. Neural Comput. 1998;10(4):821–35. https://doi.org/10.1162/089976698300017502.
Zador A. Impact of synaptic unreliability on the information transmitted by spiking neurons. J Neurophysiol. 1998;79(3):1219–29. https://doi.org/10.1152/jn.1998.79.3.1219. PMID: 9497403.
Goldman MS, Maldonado P, Abbott LF. Redundancy reduction and sustained firing with stochastic depressing synapses. J Neurosci. 2002;22(2):584–91. https://doi.org/10.1523/JNEUROSCI.22-02-00584.2002.
London M, Schreibman A, Häusser M, Larkum ME, Segev I. The information efficacy of a synapse. Nat Neurosci. 2002;5:332–40.
Stone E, Haario H, Lawrence JJ. A kinetic model for the frequency dependence of cholinergic modulation at hippocampal GABAergic synapses. Math Biosci. 2014;258:162–75.
Lee CCJ, Anton M, Poon CS, McRae GJ. A kinetic model unifying presynaptic short-term facilitation and depression. J Comput Neurosci. 2008;26(3):459–73.
Khanin R, Parnas I, Parnas H. On the feedback between theory and experiment in elucidating the molecular mechanisms underlying neurotransmitter release. Bull Math Biol. 2006;68(5):997–1009.
Markram H, Wang Y, Tsodyks M. Differential signaling via the same axon of neocortical pyramidal neurons. Proc Natl Acad Sci USA. 1998;95(9):5323–8.
Dittman JS, Kreitzer AC, Regehr WG. Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. J Neurosci. 2000;20(4):1374–85.
Buzsáki G, Wang XJ. Mechanisms of gamma oscillations. Annu Rev Neurosci. 2012;35(1):203–25. PMID: 22443509.
Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA. 1997;94(2):719–23.
Lu T, Trussell LO. Inhibitory transmission mediated by asynchronous transmitter release. Neuron. 2000;26(3):683–94.
Zucker RS, Regehr WG. Short-term synaptic plasticity. Annu Rev Physiol. 2002;64(1):355–405. PMID: 11826273.
Dittman JS, Regehr WG. Calcium dependence and recovery kinetics of presynaptic depression at the climbing fiber to Purkinje cell synapse. J Neurosci. 1998;18(16):6147–62.
Stevens CF, Wesseling JF. Activitydependent modulation of the rate at which synaptic vesicles become available to undergo exocytosis. Neuron. 1998;21(2):415–24.
von Gersdorff H, Schneggenburger R, Weis S, Neher E. Presynaptic depression at a calyx synapse: the small contribution of metabotropic glutamate receptors. J Neurosci. 1997;17(21):8137–46.
Wang LY, Kaczmarek LK. High-frequency firing helps replenish the readily releasable pool of synaptic vesicles. Nature. 1998;394:384–8.
González JC, Lignani G, Maroto M, Baldelli P, Hernández-Guijo JM. Presynaptic muscarinic receptors reduce synaptic depression and facilitate its recovery at hippocampal GABAergic synapses. Cereb Cortex. 2014;24(7):1818–31.
Haario H, Saksman E, Tamminen J. An adaptive Metropolis algorithm. Bernoulli. 2001;7(2):223–42.
Haario H, Laine M, Mira A, Saksman E. DRAM: efficient adaptive MCMC. Stat Comput. 2006;16(4):339–54.
Stevens CF. Quantal release of neurotransmitter and long-term potentiation. Cell. 1993;72(Supplement):55–63.
Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27(3):379–423.
Freedman D, Diaconis P. On the histogram as a density estimator: L2 theory. Z Wahrscheinlichkeitstheor Verw Geb. 1981;57(4):453–76.
McGill WJ. Multivariate information transmission. Psychometrika. 1954;19(2):97–116.
Kwak N, Choi CH. Input feature selection by mutual information based on Parzen window. IEEE Trans Pattern Anal Mach Intell. 2002;24(12):1667–71.
Kozachenko LF, Leonenko NN. Sample estimate of the entropy of a random vector. Probl Inf Transm. 1987;23(1–2):95–101.
Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information. Phys Rev E. 2004;69:066138.
Lindner B, Gangloff D, Longtin A, Lewis JE. Broadband coding with dynamic synapses. J Neurosci. 2009;29(7):2076–87. https://doi.org/10.1523/JNEUROSCI.3702-08.2009.
Rotman Z, Deng PY, Klyachko VA. Short-term plasticity optimizes synaptic information transmission. J Neurosci. 2011;31(41):14800–9. https://doi.org/10.1523/JNEUROSCI.3231-11.2011.
Rosenbaum R, Rubin J, Doiron B. Short term synaptic depression imposes a frequency dependent filter on synaptic information transfer. PLoS Comput Biol. 2012;8(6):e1002557. https://doi.org/10.1371/journal.pcbi.1002557.
Varga C, Golshani P, Soltesz I. Frequency-invariant temporal ordering of interneuronal discharges during hippocampal oscillations in awake mice. Proc Natl Acad Sci USA. 2012;109(40):E2726–E2734.
Yi F, Ball J, Stoll KE, Satpute VC, Mitchell SM, Pauli JL, Holloway BB, Johnston AD, Nathanson NM, Deisseroth K, Gerber DJ, Tonegawa S, Lawrence JJ. Direct excitation of parvalbumin-positive interneurons by M1 muscarinic acetylcholine receptors: roles in cellular excitability, inhibitory transmission and cognition. J Physiol. 2014;592(16):3463–94.
Pouille F, Scanziani M. Routing of spike series by dynamic circuits in the hippocampus. Nature. 2004;429:717–23.
Crutchfield JP. The calculi of emergence: computation, dynamics, and induction. Physica D. 1994;75:11–54.
Shalizi CR, Crutchfield JP. Computational mechanics: pattern and prediction, structure and simplicity. J Stat Phys. 2001;104(3):817–79.
Kesten H. Random difference equations and renewal theory for products of random matrices. Acta Math. 1973;131:207–48.
Vervaat W. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv Appl Probab. 1979;11(4):750–83.
Dufresne D. Algebraic properties of beta and gamma distributions, and applications. Adv Appl Math. 1998;20(3):285–99.
Acknowledgements
We acknowledge David Patterson for his helpful comments on some of the statistical techniques.
Availability of data and materials
Please contact author for data requests.
Funding
Electrophysiology experiments were performed in the laboratory of Chris McBain with intramural support from National Institute of Child Health and Human Development. Later work was supported by National Center for Research Resources Grant P20RR015583, National Institutes of Health Grant R0106968901A1, and startup support from the University of Montana Office of the Vice President for Research (to JJL).
Author information
Contributions
EBM did all the statistical analysis in the study. JJL carried out all experiments and preliminary data analysis. EFS conceived of the study, developed the design and analyzed the results. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Let \(\theta_{1},\theta_{2},\ldots \) be i.i.d. \(\mathbb{R}^{d}\)-valued (\(d\geq 1\)) random variables with generic copy θ and independent of \(X_{0}\). Suppose that \(X_{n}=\varPsi (X_{n-1},\theta_{n} )\) for all \(n\geq1\) and a continuous function \(\varPsi: \mathbb{R}^{d+1}\rightarrow \mathbb{R}\). If \(X_{n}\) converges in distribution to \(X_{\infty}\), then
\[ X_{\infty}\stackrel{d}{=}\varPsi (X_{\infty},\theta ), \]
where θ is independent of \(X_{\infty}\).
Appendix B
Suppose independent, identically distributed increases in the amount of calcium \(\{\Delta_{n}\}_{n\geq1}\) occur at times \(\{t_{n}\}_{n\geq1}\), and we are interested in the distribution of the amount of calcium accumulated. In one dimension, the random recurrence equation for the calcium concentration is given by
\[ C_{n}=A_{n}C_{n-1}+\Delta_{n},\quad n\geq1, \tag{10} \]
where \(A_{n}= e^{-T_{n}/\tau_{\mathrm{ca}}}\), and the waiting times \(T_{1}=t_{1}\), \(T_{n} = t_{n}-t_{n-1}\), \(n\geq2\), are i.i.d., so that \(\{t_{n}\}\) forms a renewal process. Moreover, the \((A_{n},\Delta_{n} )\) are assumed to be i.i.d. vectors. \(C_{0}\) is the base calcium concentration, which is assumed to be zero.
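As a quick numerical sanity check (not part of the original derivation), the recurrence \(C_{n}=A_{n}C_{n-1}+\Delta_{n}\) can be simulated directly. The sketch below assumes NumPy is available, takes exponential waiting times with rate λ, and, for illustration only, a constant increment Δ; the parameter values are hypothetical. In that case the stationary mean satisfies \(\mathbb{E}[C]=\mathbb{E}[A]\,\mathbb{E}[C]+\Delta\) with \(\mathbb{E}[A]=\lambda\tau_{\mathrm{ca}}/(\lambda\tau_{\mathrm{ca}}+1)\), so \(\mathbb{E}[C]=\Delta (\lambda\tau_{\mathrm{ca}}+1 )\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: spike rate, calcium decay constant, fixed Ca jump.
lam, tau_ca, delta = 5.0, 0.2, 1.0
n = 200_000

T = rng.exponential(1.0 / lam, size=n)   # i.i.d. waiting times T_n ~ Exp(lam)
A = np.exp(-T / tau_ca)                  # decay factors A_n = exp(-T_n / tau_ca)

C = np.empty(n)
c = 0.0                                  # C_0 = 0 (base concentration)
for k in range(n):
    c = A[k] * c + delta                 # C_n = A_n * C_{n-1} + Delta_n
    C[k] = c

# Stationary mean E[C] = delta * (lam * tau_ca + 1) from the fixed-point equation.
mean_theory = delta * (lam * tau_ca + 1.0)
mean_sim = C[n // 10:].mean()            # discard the transient
print(mean_sim, mean_theory)
```

With \(\lambda\tau_{\mathrm{ca}}=1\) here, the simulated long-run average should settle near \(2\Delta\), independently of \(C_{0}\), as the convergence result above predicts.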
Iterating (10) leads to
\[ C_{n}=\sum_{k=1}^{n} \Biggl(\,\prod_{j=k+1}^{n}A_{j} \Biggr)\Delta_{k}+ \Biggl(\,\prod_{j=1}^{n}A_{j} \Biggr)C_{0} \]
for each \(n\geq1\).
for each \(n\geq1\). Using the independence assumptions and replacing \((A_{k},\Delta _{k} )_{1\leq k\leq n}\) with the copy \((A_{n+1k},\Delta _{n+1k} )_{1\leq k\leq n}\) we observe that
where \(\stackrel{d}{=}\) denotes equality in distribution.
A fundamental theoretical result shown in [46] asserts that if
\[ \mathbb{E} [\log A_{1} ]<0 \quad\text{and}\quad \mathbb{E} \bigl[\log^{+}\Delta_{1} \bigr]<\infty, \]
then the series
\[ C=\sum_{k=1}^{\infty}A_{1}\cdots A_{k-1}\Delta_{k} \]
will converge w.p. 1 and the distribution of \(C_{n}\) converges to that of C, independently of \(C_{0}\). Also, by the continuous mapping theorem (see Appendix A), we infer from (10) that if \(C_{n}\stackrel {d}{\rightarrow}C\), then C satisfies the distributional identity
\[ C\stackrel{d}{=}AC+\Delta, \]
where \((A,\Delta )\) is a copy of \((A_{1},\Delta_{1} )\) independent of C.
Following [47] let \(X:=AC\), where A and C are independent. Then we can write
\[ C\stackrel{d}{=}X+\Delta \]
and hence
\[ X\stackrel{d}{=}A (X+\Delta ). \tag{11} \]
Iterating (11) results in
\[ X\stackrel{d}{=}\sum_{k=1}^{\infty}A_{1}\cdots A_{k}\Delta_{k}, \]
where \(A_{n} = e^{-T_{n}/\tau_{\mathrm{ca}}}\).
Note that if \(T_{n}\) is exponentially distributed with rate parameter λ, then the random variable \(A_{n}=e^{-T_{n}/\tau_{\mathrm{ca}}}\) has a beta distribution with shape parameters \((\lambda\tau_{\mathrm{ca}}, 1)\), denoted by \(A_{n}\sim \mathit{Beta} (\lambda\tau_{\mathrm{ca}},1 )\). We then apply the beta–gamma algebraic identities ([48]), which state that, for independent random variables,
\[ \mathit{Beta} (\alpha,\beta )\cdot\mathit{Gamma} (\alpha+\beta,\theta )\stackrel{d}{=}\mathit{Gamma} (\alpha,\theta ). \tag{12} \]
Alternatively,
\[ \mathit{Gamma} (\alpha,\theta )+\mathit{Gamma} (\beta,\theta )\stackrel{d}{=}\mathit{Gamma} (\alpha+\beta,\theta ). \]
Applying (12) in (11), and considering that \(A_{n}\sim \mathit{Beta} (\lambda\tau_{\mathrm{ca}},1 )\) with exponentially distributed increments \(\Delta\sim \mathit{Gamma} (1,\theta )\), then
\[ X\sim \mathit{Gamma} (\lambda\tau_{\mathrm{ca}},\theta ). \]
Note that \(C\stackrel{d}{=}X+\Delta\) implies
\[ C\stackrel{d}{=}\mathit{Gamma} (\lambda\tau_{\mathrm{ca}},\theta )+\mathit{Gamma} (1,\theta ). \]
Finally,
\[ C\sim \mathit{Gamma} (\lambda\tau_{\mathrm{ca}}+1,\theta ). \]
Note that a random variable X that is gamma-distributed with shape parameter α and scale parameter θ is denoted by \(\mathit{Gamma} (\alpha ,\theta )\), and the corresponding probability density function is
\[ f(x)=\frac{x^{\alpha-1}e^{-x/\theta}}{\varGamma (\alpha )\theta^{\alpha}},\quad x>0. \]
Also, a random variable X that is beta-distributed with shape parameters α and β is denoted by \(\mathit{Beta} (\alpha,\beta )\), and the probability density function of the beta distribution is
\[ f(x)=\frac{x^{\alpha-1} (1-x )^{\beta-1}}{B(\alpha,\beta)},\quad 0< x<1, \]
where \(B(\alpha,\beta)=\frac{\varGamma(\alpha)\varGamma(\beta)}{\varGamma(\alpha +\beta)}\).
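The beta–gamma algebra invoked above is also easy to verify by simulation. The following sketch (NumPy assumed; the shape and scale values are illustrative, not taken from the paper) draws independent samples from both sides of \(\mathit{Beta} (\alpha,\beta )\cdot \mathit{Gamma} (\alpha+\beta,\theta )\stackrel{d}{=}\mathit{Gamma} (\alpha,\theta )\) and compares their first two moments with the theoretical values \(\alpha\theta\) and \(\alpha\theta^{2}\).

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, theta = 1.5, 1.0, 2.0            # illustrative shape/scale parameters
n = 400_000

B = rng.beta(alpha, beta, size=n)             # Beta(alpha, beta)
G = rng.gamma(alpha + beta, theta, size=n)    # Gamma(alpha+beta, theta), independent of B
lhs = B * G                                   # left side of the identity
rhs = rng.gamma(alpha, theta, size=n)         # claimed equal in distribution

# Gamma(alpha, theta) has mean alpha*theta and variance alpha*theta^2.
print(lhs.mean(), rhs.mean(), alpha * theta)
print(lhs.var(), rhs.var(), alpha * theta**2)
```

Matching moments do not prove equality in distribution, of course, but a histogram comparison of `lhs` and `rhs` gives the same picture.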
Appendix C
We address computing an approximate distribution for \(P_{r}\). For ease of notation, let \(X=\overline{R}\) be a random variable defined as follows:
\[ X = \frac{1-e^{-k_{\mathrm{min}}T}}{1- (1-P_{\mathrm{max}} )e^{-k_{\mathrm{min}}T}}. \]
Let the random variable T have the exponential distribution with probability density function
\[ f_{T}(t)=\lambda e^{-\lambda t},\quad t>0. \]
We can compute an analytical expression for the probability density function (PDF) of the fixed point R̅ using the distribution of T.
The transformation \(X = g(T) = \frac {1-e^{-k_{\mathrm{min}}T}}{1-(1-P_{\mathrm{max}})e^{-k_{\mathrm{min}}T}}\) is a 1–1 transformation from \(\mathcal{T} = \{t \mid t > 0\}\) to \(\mathcal{X} = \{x \mid 0 < x<1\}\) with inverse \(T= g^{-1}(X) = \frac{1}{k_{\mathrm{min}}}\log (\frac {1-aX}{1-X} )\) and Jacobian
\[ J=\frac{dT}{dX}=\frac{1-a}{k_{\mathrm{min}} (1-aX ) (1-X )}, \]
where \(a=1-P_{\mathrm{max}}\). By the rule for functions of random variables, the probability density function of X is
\[ f_{X}(x)=f_{T} \bigl(g^{-1}(x) \bigr) \vert J \vert =\frac{\lambda (1-a )}{k_{\mathrm{min}}}\frac{ (1-x )^{\lambda/k_{\mathrm{min}}-1}}{ (1-ax )^{\lambda/k_{\mathrm{min}}+1}},\quad 0< x<1. \]
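This change of variables can be cross-checked numerically (again with hypothetical parameter values): sample T, map it through g, and compare the empirical distribution of X with the distribution implied by the transformation, whose CDF is \(F_{X}(x)=P (T\leq g^{-1}(x) )=1- (\frac{1-x}{1-ax} )^{\lambda/k_{\mathrm{min}}}\). NumPy is assumed available.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, k_min, P_max = 4.0, 2.0, 0.6        # illustrative rate and kinetic parameters
a = 1.0 - P_max
n = 300_000

T = rng.exponential(1.0 / lam, size=n)   # T ~ Exp(lam)
X = (1.0 - np.exp(-k_min * T)) / (1.0 - a * np.exp(-k_min * T))   # X = g(T)

def cdf(x):
    # CDF implied by the monotone transformation: P(T <= g^{-1}(x))
    return 1.0 - ((1.0 - x) / (1.0 - a * x)) ** (lam / k_min)

for x0 in (0.2, 0.5, 0.8):
    print(x0, (X <= x0).mean(), cdf(x0))
```

Because g is strictly increasing on \((0,\infty)\), the empirical CDF of the mapped samples should agree with the closed form at every point in \((0,1)\).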
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Bayat Mokhtari, E., Lawrence, J.J. & Stone, E.F. Effect of Neuromodulation of Short-term Plasticity on Information Processing in Hippocampal Interneuron Synapses. J. Math. Neurosc. 8, 7 (2018). https://doi.org/10.1186/s13408-018-0062-z
DOI: https://doi.org/10.1186/s13408-018-0062-z
Keywords
 Shortterm synaptic plasticity
 Mutual information
 Cholinergic modulation
 Hippocampal GABAergic synapses