
# A Rate-Reduced Neuron Model for Complex Spiking Behavior

Koen Dijkstra^{1}, Yuri A. Kuznetsov^{1, 2}, Michel J. A. M. van Putten^{3, 4} and Stephan A. van Gils^{1}

**7**:13

https://doi.org/10.1186/s13408-017-0055-3

© The Author(s) 2017

**Received: **29 March 2017

**Accepted: **21 November 2017

**Published: **11 December 2017

## Abstract

We present a simple rate-reduced neuron model that captures a wide range of complex, biologically plausible, and physiologically relevant spiking behavior. This includes spike-frequency adaptation, postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking. Furthermore, the model can mimic different neuronal filter properties. It can be used to extend existing neural field models, adding more biological realism and yielding a richer dynamical structure. The model is based on a slight variation of the Rulkov map.

## 1 Introduction

Networks of coupled neurons quickly become analytically intractable and computationally infeasible due to their large state and parameter spaces. Therefore, starting with the work of Beurle [1], a popular modeling approach has been the development of continuum models, called neural fields, that describe the average activity of large populations of neurons (Wilson and Cowan [2, 3], Nunez [4], Amari [5, 6]). In neural field models, the network architecture is represented by connectivity functions and the corresponding transmission delays, while differential operators characterize synaptic dynamics. All intrinsic properties of the underlying neuronal populations are condensed into firing rate functions, which replace individual neuronal action potentials and map the sum of all incoming synaptic currents to an outgoing firing rate. While some neural field models incorporate spike-frequency adaptation (Pinto and Ermentrout [7, 8], Coombes and Owen [9], Kilpatrick and Bressloff [10, 11]), more complex spiking behavior such as postinhibitory rebound, phasic spiking and accommodation, first-spike latency, and inhibition-induced spiking is mostly absent, exceptions being the recent reductions of the Izhikevich neuron (Nicola and Campbell [12], Visser and van Gils [13]).

Here, we present a rate-reduced model that is based on a slight modification of the Rulkov map (Rulkov [14], Rulkov et al. [15]), a phenomenological, map-based single neuron model. Similar to Izhikevich neurons (Izhikevich [16]), the Rulkov map can mimic a wide variety of biologically realistic spiking patterns, all of which are preserved by our rate formulation. The rate-reduced model can therefore be used to incorporate all the aforementioned types of spiking behavior into existing neural field models.

This paper is organized as follows. In Sect. 2, we present the single spiking neuron model our rate-reduced model is based upon, and illustrate different spiking patterns and filter properties. In Sect. 3 we heuristically reduce the single neuron model to a rate-based formulation, and show that the rate-reduced model preserves spiking and filter properties. We give an example of a neural field that is augmented with our rate model in Sect. 4 and end with a discussion in Sect. 5.

## 2 Single Spiking Neuron Model

In this section we present a phenomenological, map-based single neuron model, which is a slight modification of the Rulkov map (Rulkov [14], Rulkov et al. [15]). The Rulkov map was designed to mimic the spiking and spiking-bursting activity of many real biological neurons. It is computationally advantageous because iterating a map is cheaper than numerically integrating a continuous dynamical system. Furthermore, as we will show in this paper, it is straightforward to obtain a rate-reduced formulation of a slightly modified Rulkov model.

The modified Rulkov map (SNM) is a two-dimensional system consisting of a fast variable *v*, resembling the neuronal membrane potential, and a slow adaptation variable *a*. In our modification of the original model, the adaptation only implicitly depends on the membrane potential through a binary spiking variable. As we will show in the next section, this modification allows for an easy decoupling of the membrane potential and adaptation variable, and therefore a straightforward rate reduction of the model. The cost of the modification is the loss of subthreshold oscillation dynamics.

The function *f* in (1) is chosen to mimic the shape of neuronal action potentials. The variable *u* in (SNM) represents external (synaptic) input to the cell, which we assume to be given, and *s* is a binary indicator variable, given by (2): a neuron is said to spike at iteration *n* if its membrane potential is reset to \(v_{n+1}=-50\) in the next iteration. It follows from (1) that the spiking condition in (2) can be written explicitly as a threshold condition on the membrane potential and the input *u*. To mimic spiking patterns of real biological neurons, one time step should correspond to approximately 0.5 ms of time.

The parameter \(0<\varepsilon<1\) in (SNM) sets the time scale of the adaptation variable and *γ* determines the adaptation strength. The parameter *θ* can be interpreted as a spiking threshold: for constant external input \(u_{n}=\varphi\), the neuron spikes persistently if and only if \(\varphi>\theta\). After a change of variable \(a_{n}\rightarrow a_{n}+(1-\kappa)\varphi\) and parameter \(\theta\rightarrow\theta-\varphi\), constant external input vanishes. Therefore, the asymptotic response to constant input does not depend on the parameter *κ*. However, the parameter *κ* can be used to tune the transient response of the neuron to changes in external input, as it determines how input is divided between the fast and the slow subsystem of (SNM). For parameter values \(\kappa \in[0,1]\), *κ* can be interpreted as the fraction of the input that is applied to the fast subsystem, and therefore determines (together with *ε*) how quickly the membrane potential dynamics react to changes in input. Since the effective drive of the system is given by \(\kappa u_{n}-a_{n}\), changes in external input are initially magnified for \(\kappa>0\). Asymptotically, this is then counterbalanced by additional adaptation. Finally, for \(\kappa<0\), the initial response of the membrane potential to a change in input is reversed, i.e. an increase in external input initially has an inhibitory effect, and a decrease in external input initially has an excitatory effect.
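To illustrate the mechanics, the following sketch uses the classical Rulkov fast map (with reset to −1 rather than the rescaled −50 of (SNM)) together with a simplified, purely spike-triggered slow update \(a \leftarrow a + \varepsilon(\gamma s - a)\). This is a stand-in under stated assumptions, not the exact (SNM), but it reproduces tonic spiking, rest, and spike-frequency adaptation:

```python
# Illustrative sketch only: the classical Rulkov fast map (reset to -1,
# not the rescaled -50 of (SNM)), combined with a simplified, purely
# spike-triggered slow update. The exact equations of (SNM) differ.

ALPHA = 5.0  # shape parameter of the fast map (illustrative value)

def fast_map(v, w):
    """Classical Rulkov fast subsystem; w is the effective drive."""
    if v <= 0.0:
        return ALPHA / (1.0 - v) + w
    if v < ALPHA + w:
        return ALPHA + w      # plateau of the 'action potential'
    return -1.0               # reset: the neuron spikes

def simulate(u, eps=0.05, gamma=0.0):
    """Iterate the map over the input sequence u; return spike times."""
    v, a, spikes = -1.0, 0.0, []
    for n, u_n in enumerate(u):
        w = u_n - a                        # adaptation lowers the drive
        if v > 0.0 and v >= ALPHA + w:     # reset branch fires: a spike
            spikes.append(n)
            s = 1.0
        else:
            s = 0.0
        v = fast_map(v, w)
        a += eps * (gamma * s - a)         # simplified slow update
    return spikes

tonic = simulate([-2.5] * 300)               # strong drive: tonic spiking
rest = simulate([-4.0] * 300)                # weak drive: stable rest state
adapted = simulate([-2.5] * 300, gamma=1.0)  # spike-frequency adaptation
```

With \(\gamma>0\) every spike increments the adaptation variable, lowering the effective drive and stretching the interspike interval, which is the map-based analogue of the spike-frequency adaptation described below.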

### 2.1 Fast Dynamics

For a fixed drive, the fast subsystem either converges to a stable fixed point, corresponding to a neuron at rest, or generates a stable limit cycle that mimics spiking (Fig. 1). The function *f* given in (1) can easily be changed to support bistability in the fast subsystem, which allows for some additional dynamics such as ‘chattering’, a response of periodic bursts of spikes to constant input (Rulkov [14]).

### 2.2 Spiking Patterns

Izhikevich [17] classified different features of biological spiking neurons, most of which can be mimicked by our modified Rulkov model (SNM). In the following, we discuss the role of the model parameters with the help of a few physiologically relevant examples.

#### 2.2.1 Tonic Spiking/Fast Spiking

In the absence of significant adaptation, the model responds to a step input with a sustained spike train of constant frequency, known as ‘tonic spiking’; cortical interneurons that sustain such high-frequency trains are called ‘fast spiking’. The transient response to the onset of the input can be tuned with *κ*.

#### 2.2.2 Spike-Frequency Adaptation/Regular Spiking

Most cortical excitatory neurons are not ‘fast spiking’, but respond to a step input with a spike train of slowly decreasing frequency, a phenomenon known as ‘spike-frequency adaptation’ (also called ‘regular spiking’). This kind of spiking behavior can be modeled by applying all input to the fast subsystem (\(\kappa=1\)) and choosing \(\varepsilon\ll 1\). The adaptation variable then acts as a slow time scale, such that a single spike has a long-lasting effect on the adaptation variable (Fig. 2B). The level of adaptation can be controlled with *γ*.

#### 2.2.3 Rebound Spiking and Accommodation

The excitability of some neurons is temporarily enhanced after they are released from hyperpolarizing current, which can result in the firing of one or more ‘rebound spikes’. Rebound spiking is an important mechanism for central pattern generation for heartbeat and other motor patterns in many neuronal systems (Chik et al. [18]). In the modified Rulkov map, postinhibitory rebound spiking can be modeled by choosing \(\kappa>1\). In this case, the adaptation variable will become negative while the cell is hyperpolarized, which can be sufficient to trigger temporary spiking once the inhibitory input is turned off (Fig. 2C). Similarly, excitatory ‘subthreshold’ (\(u_{n}<\theta\)) input can elicit temporary spiking if the input is ramped up sufficiently fast (Fig. 2D).

#### 2.2.4 Spike Latency and Inhibition-Induced Spiking

If all input is applied to the slow subsystem (\(\kappa=0\)), there can be a large latency between the input onset and the first spike of the neuron, yielding a delayed response to a pulse input (Fig. 2E). For \(\kappa<0\), the initial response of the model to changes in input is reversed: excitation initially leads to hyperpolarization of the neuron and inhibition can induce temporary spiking (Fig. 2F). This inhibition-induced spiking is a feature of many thalamo-cortical neurons (Izhikevich [17]).

### 2.3 Neuronal Filtering

As discussed above, the parameter *κ* can tune the transient spiking response of the modified Rulkov map to changes in external input. In reality, neurons often receive strong periodic input, e.g. from a nearby synchronous neuronal population. Information transfer between neurons may be optimized by temporal filtering, which is especially important when the same signal transmits distinct messages (Blumhagen et al. [19]). In this section, we study the response of (SNM) to harmonic input (6) with amplitude *φ*, phase shift \(\vartheta\in[0,2\pi)\), and frequency parameter \(\omega\in[0,1000]\), which corresponds to the input frequency in Hz under the assumption that one iteration of (SNM) corresponds to 0.5 ms of time. A Rulkov neuron (SNM) will never spike if the input amplitude is too small, and for a given *φ* it spikes only if *ω* and *ϑ* in (6) are such that the external input exceeds the threshold *θ* for a sufficiently long time. The modulus of the frequency response (11) quantifies how strongly input at frequency *ω* is transferred to the effective drive of the fast subsystem, and the parameter *κ* can therefore be used to model filter properties of the neuron. For \(-1+\varepsilon<\kappa<1\) high frequencies get attenuated, and the neuron can act as a low-pass filter in the sense that periodic input within a certain amplitude range only elicits a spiking response if its frequency is low enough (Fig. 4A). Similarly, for \(\kappa>1\) (and \(\kappa<-1+\varepsilon\)), high frequencies get amplified, and there exists an amplitude range for which the neuron acts as a high-pass filter (Fig. 4B).
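To make the low-pass mechanism concrete in a simple setting, consider a generic first-order linear update for a slow variable driven by harmonic input; this is an illustrative caricature of the slow subsystem, not the response function (11) of (SNM) itself. Writing \(u_{n}=\varphi e^{i\omega n}\) and looking for a solution \(a_{n}=Ae^{i\omega n}\) of

\[ a_{n+1}=(1-\varepsilon)a_{n}+\varepsilon u_{n} \]

gives

\[ A=\frac{\varepsilon\varphi}{e^{i\omega}-(1-\varepsilon)}, \qquad \lvert A\rvert=\frac{\varepsilon\varphi}{\sqrt{1-2(1-\varepsilon)\cos\omega+(1-\varepsilon)^{2}}}, \]

which decreases monotonically from \(\varphi\) at \(\omega=0\) to \(\varepsilon\varphi/(2-\varepsilon)\) at \(\omega=\pi\): such a slow variable tracks slow input but attenuates fast input. In (SNM) the slow variable is subtracted from \(\kappa u_{n}\) in the effective drive, which is how *κ* shifts the balance between low- and high-frequency sensitivity.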

## 3 The Rate-Reduced Neuron Model

Neural field models are based on the assumption that neuronal populations convey all relevant information in their (average) firing rates. If one wants to incorporate certain spiking dynamics, one has to come up with a corresponding rate-reduced formulation first. In this section we present a rate-reduced version of the Rulkov model (SNM) that can be used to extend existing neural field models.

As noted in Sect. 2, the adaptation variable *a* in the spiking neuron model (SNM) only implicitly depends on the membrane potential *v* via the binary spiking variable *s*. We can therefore decouple the adaptation variable from the membrane potential by replacing the binary spiking variable defined in (2) by the instantaneous firing rate (5) of the fast subsystem (FSS), yielding the rate-reduced neuron model (RNM).
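As an illustrative sketch of this decoupling (not the exact (RNM) derived in the text), replacing the binary *s* by a rate leaves a one-dimensional equation for the adaptation variable alone. Below, both the smooth rate function `S` and the simplified adaptation dynamics \(a' = \varepsilon(\gamma S(u-a) - a)\) are hypothetical stand-ins:

```python
import math

def S(drive):
    """Hypothetical smooth firing rate function (stand-in for Eq. (5))."""
    return 1.0 / (1.0 + math.exp(-8.0 * (drive - 0.5)))

def rate_model(u, eps=0.05, gamma=2.0, steps=2000):
    """Euler iteration of a one-dimensional adaptation equation
    a' = eps * (gamma * S(u - a) - a): the spiking variable s of (SNM)
    is replaced by the instantaneous rate S, decoupling a from v."""
    a, rates = 0.0, []
    for _ in range(steps):
        r = S(u - a)               # output firing rate at the current drive
        a += eps * (gamma * r - a) # rate-driven adaptation
        rates.append(r)
    return rates

# Constant suprathreshold input: the rate starts high and adapts
# monotonically toward a lower steady value.
rates = rate_model(u=1.0)
```

Because the right-hand side is one-dimensional and monotone here, the rate relaxes to its steady state without oscillation; the qualitative adaptation behavior does not depend on the exact sigmoid chosen.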

### 3.1 Frequency Response of the Reduced Model

### 3.2 The Firing Rate Function

Because the fast subsystem (FSS) is a map, the period *P* of its limit cycle lies in \(\mathbb{N}\) for all positive suprathreshold drives *ς*. The spiking rate function (5) is therefore staircase-like, with a point of discontinuity wherever the period jumps from *P* to \(P+1\). Let \(\lbrace\varsigma_{1},\varsigma_{2},\ldots\rbrace\) denote the set of all points of discontinuity of the firing rate function in decreasing order. For \(\varsigma\geq\varsigma_{1}=1\), the ‘reset potential’ \(v=-50\) in (FSS) is immediately mapped to a non-negative number, and the neuron therefore spikes at its maximal frequency of once in three iterations. Similarly, for \(\varsigma_{1}>\varsigma\geq\varsigma_{2}=\frac{1}{2}(5-\sqrt{17})\), the membrane potential stays in the left interval for two iterations and the neuron spikes once in four iterations. In general, there is a jump discontinuity at each \(\varsigma_{k}\), where the period of the limit cycle changes from \(k+3\) to \(k+2\) iterations.
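The integer-valued staircase of periods can be checked numerically. The sketch below again uses the classical Rulkov fast map as a stand-in for (FSS), so the reset is −1 and the drive values at which the period jumps differ from the \(\varsigma_{k}\) quoted above:

```python
# Stand-in for (FSS): classical Rulkov fast map (reset -1, not -50).
ALPHA = 5.0  # fast-map shape parameter (illustrative value)

def fast_map(v, w):
    """Classical Rulkov fast subsystem at constant drive w."""
    if v <= 0.0:
        return ALPHA / (1.0 - v) + w
    if v < ALPHA + w:
        return ALPHA + w
    return -1.0  # reset

def period(w, max_iter=1000):
    """Number of iterations between consecutive resets at drive w,
    or None if the fast map settles on a fixed point (no spiking)."""
    v, last_reset = -1.0, None
    for n in range(max_iter):
        if v > 0.0 and v >= ALPHA + w:  # reset branch fires: a spike
            if last_reset is not None:
                return n - last_reset
            last_reset = n
        v = fast_map(v, w)
    return None

# Sweeping the drive shows the staircase: the period is always an
# integer and shrinks in unit steps as the drive grows.
periods = [period(-3.4 + 0.01 * k) for k in range(141)]  # w in [-3.4, -2.0]
```

For this stand-in map the maximal rate is likewise once per three iterations, while weaker drives near the onset of spiking yield long integer periods.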

Here *H* denotes the Heaviside step function, which allows the staircase-like rate function to be written as a sum of steps located at the points \(\varsigma_{k}\). Motivated by stochastic fluctuations of the threshold parameter *θ* in (SNM), it is natural to define an expected firing rate \(\langle S\rangle\colon\mathbb{R}\to\mathbb{R}\), given by smoothing each step with the distribution of the noise; in practice, \(\langle S\rangle\) is well approximated by truncating the resulting sum after a finite number of terms *N* (Fig. 7).
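Assuming Gaussian threshold noise, each Heaviside step of the staircase is smoothed into a normal cumulative distribution. The sketch below keeps only the two jumps \(\varsigma_{1}=1\) and \(\varsigma_{2}=\frac{1}{2}(5-\sqrt{17})\) quoted in the text; the base level and the noise scale are illustrative choices:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Discontinuity points from the text with their jump sizes: at varsigma_1
# the rate jumps from 1/4 to 1/3, at varsigma_2 from 1/5 to 1/4.
JUMPS = [
    (1.0, 1.0 / 3.0 - 1.0 / 4.0),                            # varsigma_1
    ((5.0 - math.sqrt(17.0)) / 2.0, 1.0 / 4.0 - 1.0 / 5.0),  # varsigma_2
]
BASE = 1.0 / 5.0  # rate just below varsigma_2 (lower jumps truncated)

def expected_rate(drive, noise=0.5):
    """Smoothed staircase: each Heaviside step H(drive - s_k) is replaced
    by the Gaussian CDF Phi((drive - s_k) / noise)."""
    return BASE + sum(d * Phi((drive - s) / noise) for s, d in JUMPS)
```

Keeping only \(N=2\) jumps makes the approximation valid near the top of the staircase only; the full expected rate retains more terms of the sum.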

## 4 Augmenting Neural Fields

When large populations of neurons are modeled by networks of individual, interconnected cells, the high dimensionality of state and parameter spaces makes mathematical analysis intractable and numerical simulations costly. Moreover, large network simulations provide little insight into global dynamical properties. A popular modeling approach to circumventing the aforementioned problems is the use of neural field equations. These models aim to describe the dynamics of large neuronal populations, where spikes of individual neurons are replaced by (averaged) spiking rates and space is continuous. Another advantage of neural fields is that they are often well suited to model experimental data. In brain slice preparations, spiking rates can be measured with an extracellular electrode, while intracellular recordings are much more involved. Furthermore, the most common clinical measurement techniques of the brain, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), both represent the average activity of large groups of neurons and may therefore be better modeled by population equations. The first neural field model can be attributed to Beurle [1]; however, the theory really took off with the work of Wilson and Cowan [2, 3], Amari [5, 6], and Nunez [4].

The connectivity kernels \(J_{ij}\colon\overline{\varOmega}\times\overline{\varOmega}\to\mathbb{R}\) describe the strength of connections from neurons of population *j* at position \(x'\) to neurons of population *i* at position *x*. The kernels are assumed to be isotropic, where \(\eta_{ij}\) is the maximal strength of connections from population *j* and \(\mu_{ij}\) is the spatial decay rate of the connectivity. Both firing rate functions \(S_{i}\colon\mathbb{R}\to\mathbb{R}\) are chosen to approximate the expected firing rate of Rulkov neurons (SNM) with a noise level of \(\sigma^{2}=\frac{1}{4}\) (Fig. 7).

For comparison, we also simulate the corresponding network of spiking Rulkov neurons (SNM), in which the synaptic input to neuron *i* is given by a weighted sum over the activity of the network, where *N* denotes the total number of neurons and \(c_{ij}\) is the connection strength from neuron *j* to neuron *i*. To match the parameters in Table 1, we split the total population into two subpopulations of 300 neurons each, which are both equidistantly placed on the interval \([-1,1]\). Neurons within the same subpopulation share the same intrinsic parameters, and uncorrelated (in space and time) Gaussian noise is added to the threshold parameters. Finally, the connection strengths in the Rulkov network are obtained by evaluating the connectivity kernels at the positions of neurons *j* and *i*, respectively.

**Table 1** Parameter overview for the neural field (ANF)

| Parameter | \(\alpha_{i}\) | \(\theta_{i}\) | \(\kappa_{i}\) | \(\varepsilon_{i}\) | \(\gamma_{i}\) | \(\rho_{i}\) | \(\eta_{i1}\) | \(\eta_{i2}\) | \(\mu_{i1}\) | \(\mu_{i2}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Population 1 | \(\frac{1}{20}\) | \(\frac{1}{2}\) | 2 | \(\frac{1}{1000}\) | 5 | 150 | \(\frac{2}{3}\) | \(-\frac{1}{3}\) | 4 | 1 |
| Population 2 | \(\frac{1}{10}\) | \(\frac{4}{5}\) | \(\frac{1}{10}\) | \(\frac{1}{100}\) | 2 | 150 | \(\frac{11}{15}\) | \(-\frac{11}{30}\) | \(\frac{17}{4}\) | \(\frac{11}{10}\) |
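As an illustrative reading of the kernel description (the exponential form and the \(1/N\) normalization are assumptions, not taken from the text), the connection strengths of the spiking network can be sampled as follows, using the \(\eta_{ij}\) and \(\mu_{ij}\) values from Table 1:

```python
import math

N_PER_POP = 300  # neurons per subpopulation (from the text)

# eta_ij and mu_ij from Table 1 (rows: target population i, cols: source j).
ETA = [[2 / 3, -1 / 3], [11 / 15, -11 / 30]]
MU = [[4.0, 1.0], [17 / 4, 11 / 10]]

# Each subpopulation is placed equidistantly on [-1, 1].
positions = [-1.0 + 2.0 * k / (N_PER_POP - 1) for k in range(N_PER_POP)]

def J(i, j, x, xp):
    """Assumed exponential connectivity kernel J_ij(x, x')."""
    return ETA[i][j] * math.exp(-MU[i][j] * abs(x - xp))

def c(i, j, k, l):
    """Connection strength onto neuron k of population i from neuron l
    of population j, sampled from the kernel (1/N scaling assumed)."""
    return J(i, j, positions[k], positions[l]) / N_PER_POP
```

With this reading, within-population connections (\(\eta_{i1}>0\)) are excitatory and cross-population connections (\(\eta_{i2}<0\)) are inhibitory, and each strength is maximal between neurons at the same position.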

## 5 Discussion

This paper presents a simple rate-reduced neuron model that is based on a variation of the Rulkov map (Rulkov [14], Rulkov et al. [15]), and can be used to incorporate a variety of non-trivial spiking behavior into existing neural field models.

The modified Rulkov map (SNM) is a phenomenological, two-dimensional single neuron model. The isolated dynamics of its fast time scale either generates a stable limit cycle, mimicking spiking activity, or a stable fixed point, corresponding to a neuron at rest (Fig. 1). The slow time scale of the Rulkov map acts as a dynamic spiking threshold and emulates the combined effect of slow recovery processes. The modified Rulkov map can mimic a wide variety of spiking patterns, such as spike-frequency adaptation, postinhibitory rebound, phasic spiking, spike accommodation, spike latency and inhibition-induced spiking (Fig. 2). Furthermore, the model can mimic different neuronal filter properties. Depending on how external input is applied to the model, it can act as either a high-pass or low-pass filter (Figs. 3 and 4).

The rate-reduced model (RNM) is derived heuristically and given by a simple one-dimensional differential equation. On the single cell level, the rate-reduced model closely mimics the spiking dynamics (Fig. 5) and filter properties (Fig. 6) of the full spiking neuron model. While a close approximation of the (expected) firing rate of Rulkov neurons (Fig. 7) is needed to mimic their behavior quantitatively, the types of qualitative dynamics of the rate-reduced model do not depend on the exact choice of firing rate function.

Due to its simplicity, it is straightforward to add the rate-reduced model to existing neural field models. In the resulting augmented equations, parameters can be chosen according to the spiking behavior of a single isolated cell. In our particular example (ANF), the emerging spatiotemporal pattern closely resembles the dynamics of the corresponding spiking neural network (Fig. 8). We believe that this is an elegant way to add more biological realism to existing neural field models, while simultaneously enriching their dynamical structure.

### 5.1 Conclusions

We used a variation of a simple toy model of a spiking neuron (Rulkov [14], Rulkov et al. [15]) to derive a corresponding rate-reduced model. Although purely phenomenological, the model mimics a wide variety of biologically observed spiking behaviors, yielding a simple way to incorporate complex spiking behavior into existing neural field models. Because all parameters in the resulting augmented neural field equations have a representative in the spiking neuron network (and vice versa), the otherwise often problematic translation from results obtained with neural field models back to biophysical properties of spiking networks is greatly simplified. An example demonstrated that the augmented neural field equations can produce spatiotemporal patterns that cannot be generated with corresponding ‘classical’ neural fields.

## Declarations

### Availability of data and materials

The conclusions of this paper are solely based on mathematical models.

### Funding

K.D. was supported by a grant from the Twente Graduate School (TGS).

### Authors’ contributions

Conceptualization, K.D., Y.K., M.v.P. and S.v.G.; methodology, K.D. and S.v.G.; investigation, K.D.; writing – original draft, K.D.; writing – review & editing, K.D., Y.K., M.v.P. and S.v.G.; visualization, K.D.; supervision, Y.K., M.v.P. and S.v.G. All authors read and approved the final manuscript.

### Ethics approval and consent to participate

Our study does not involve human participants, human data or human tissue.

### Competing interests

The authors declare no competing financial interests.

### Consent for publication

This manuscript does not contain any individual person’s data.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

- Beurle RL. Properties of a mass of cells capable of regenerating pulses. Philos Trans R Soc Lond B. 1956;240:55–94.
- Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.
- Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80.
- Nunez PL. The brain wave equation: a model for the EEG. Math Biosci. 1974;21:279–97.
- Amari S. Homogeneous nets of neuron-like elements. Biol Cybern. 1975;17:211–20.
- Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.
- Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J Appl Math. 2001;62:206–25.
- Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM J Appl Math. 2001;62:226–43.
- Coombes S, Owen MR. Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett. 2005;94:148102.
- Kilpatrick ZP, Bressloff PC. Effects of synaptic depression and adaptation on spatiotemporal dynamics of an excitatory neuronal network. Physica D. 2010;239:547–60.
- Kilpatrick ZP, Bressloff PC. Stability of bumps in piecewise smooth neural fields with nonlinear adaptation. Physica D. 2010;239:1048–60.
- Nicola W, Campbell SA. Bifurcations of large networks of two-dimensional integrate and fire neurons. J Comput Neurosci. 2013;35:87–108.
- Visser S, van Gils SA. Lumping Izhikevich neurons. EPJ Nonlinear Biomed Phys. 2014;2:226–43.
- Rulkov NF. Modeling of spiking-bursting neural behavior using two-dimensional map. Phys Rev E. 2002;65:041922.
- Rulkov NF, Timofeev I, Bazhenov M. Oscillations in large-scale cortical networks: map-based model. J Comput Neurosci. 2004;17:203–23.
- Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003;14:1569–72.
- Izhikevich EM. Which model to use for cortical spiking neurons? IEEE Trans Neural Netw. 2004;15:1063–70.
- Chik DTW, Coombes S, Wang ZD. Clustering through postinhibitory rebound in synaptically coupled neurons. Phys Rev E. 2004;70:011908.
- Blumhagen F, Zhu P, Shum J, Schärer Y-PZ, Yaksi E, Deisseroth K, Friedrich RW. Neuronal filtering of multiplexed odour representations. Nature. 2011;479:493–8.