
Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review


Many biological and neural systems can be seen as networks of interacting periodic processes. Importantly, their functionality, i.e., whether these networks can perform their function or not, depends on the emerging collective dynamics of the network. Synchrony of oscillations is one of the most prominent examples of such collective behavior and has been associated both with function and dysfunction. Understanding how network structure and interactions, as well as the microscopic properties of individual units, shape the emerging collective dynamics is critical to find factors that lead to malfunction. However, many biological systems such as the brain consist of a large number of dynamical units. Hence, their analysis has either relied on simplified heuristic models on a coarse scale, or the analysis comes at a huge computational cost. Here we review recently introduced approaches, known as the Ott–Antonsen and Watanabe–Strogatz reductions, allowing one to simplify the analysis by bridging small and large scales. Thus, reduced model equations are obtained that exactly describe the collective dynamics for each subpopulation in the oscillator network via a few collective variables only. The resulting equations are next-generation models: Rather than being heuristic, they exactly link microscopic and macroscopic descriptions and therefore accurately capture microscopic properties of the underlying system. At the same time, they are sufficiently simple to analyze without great computational effort. In the last decade, these reduction methods have become instrumental in understanding how network structure and interactions shape the collective dynamics and the emergence of synchrony. We review this progress based on concrete examples and outline possible limitations. Finally, we discuss how linking the reduced models with experimental data can guide the way towards the development of new treatment approaches, for example, for neurological disease.


Many systems in neuroscience and biology are governed on different levels by interacting periodic processes [1]. Networks of coupled oscillators provide models for such systems. Each node in the network is an oscillator (a dynamical process) and the network structure encodes which oscillators interact with each other [2]. In neuroscience, individual oscillators could be single neurons in microcircuits or neural masses on a more macroscopic level [3]. Other prominent examples in biology include cells in heart tissue [4], flashing fireflies [5], the dynamics of cilia and flagella [6], gait patterns of animals [7] or humans [8], cells in the suprachiasmatic nucleus in the brain generating the master clock for the circadian rhythm [9–11], hormone rhythms [12], suspensions of yeast cells undergoing metabolic oscillations [13, 14], and life cycles of phytoplankton in chemostats [15].

The functionality—whether function or dysfunction—of these networks depends on the collective dynamics of the interacting oscillatory nodes. Hence, one major challenge is to understand how the underlying network shapes these collective dynamics. In particular, one would like to understand how the interplay of network properties (for example, coupling topology and the strength of interactions) and characteristics of the individual nodes shape the emergent dynamics. The question of relating network structure and dynamics is particularly pertinent in the study of large-scale brain dynamics: For example, one can investigate how emergent functional connectivity (a dynamical property) arises from specific structural connectomes [16, 17], and how each of these relates to cognition or disease. Progress in this direction aims not only to identify how healthy or pathological dynamics is linked to the network structure, but also to develop new treatment approaches [18–21].

One of the most prominent collective behaviors of an oscillator network occurs when nodes synchronize and oscillate in unison [22–24]; indeed, most of the examples given above display synchrony in some form, which appears to be essential to the proper functioning of biological life processes. Here we think of synchrony in a general way: It can come in many varieties, including phase synchrony, where the states of different oscillators align exactly, or frequency synchrony, where the oscillators’ frequencies coincide. Synchrony may be global across the entire network or localized in a particular part—the rest of the network being nonsynchronized—thus giving rise to synchrony patterns. How exactly the dynamics of synchrony patterns in an oscillator network relate to its functional properties is still not fully understood. In the brain, there is a wide range of rhythms, but the presence of dominant rhythms in different frequency bands indicates that some level of synchrony is common at multiple scales [25, 26]. Indeed, synchrony has been associated with solving functional tasks including, but not limited to, memory [27], computational functions [28], cognition [29], attention [30, 31], routing of information [31–33], control of gait and motion [34], or breathing [35, 36]. As a specific example, coordination of dynamics at the theta frequency (4–12 Hz) between hippocampus and frontal cortex is enhanced in spatial memory tasks [37]. At the same time, abnormal synchrony patterns are associated with malfunction in disorders such as epilepsy and Parkinson’s disease [38–41]; evolving patterns of synchrony can, for example, be observed in electroencephalographic (EEG) recordings throughout epileptogenesis in mice [42].

With a detailed model of each node and a large number of nodes in the network, the task of relating network structure and dynamics is daunting. Hence, simplifying analytical reduction methods are needed that—rather than being purely computational—yield a mechanistic understanding of the inherent processes leading to a certain dynamic macroscopic behavior. If many biologically relevant state variables are considered in a microscopic model, each node is represented by a high-dimensional dynamical system by itself. Hence, a common approach is to simplify the description of each oscillatory node to its simplest form, a phase oscillator; in the reduced system the state of each oscillator is given by a single periodic phase variable that captures the state of the periodic process. In this case, biologically relevant details are captured by the evolution of the phase variable and its interaction with the phases of the other nodes. There are two important ways to get to a phase description of an oscillator network, both of which are common tools used, for example, in computational neuroscience; see [43, 44] for recent reviews. First, under the assumption of weak coupling one can go through a process of phase reduction to obtain a phase description [45–50]. Second, one can—based on the biophysical properties of the system—impose a phase model such as the Kuramoto model [51] or a network of Theta neurons [52].

The main topic of this paper is an introduction to and a review of recent advances in exact mean-field reductions for networks of coupled oscillators. The main achievement is that for certain classes of oscillator networks, it is possible to replace a large number of nodes by a collective or mean-field variable that describes the network evolution exactly—thereby reducing the complexity of the problem immensely. In the neuroscientific context, each subpopulation may represent a different population of neurons that may exhibit temporal patterns of synchronization or activity [16, 17, 53]. Of course, mean-field approaches motivated by statistical physics have a long history in computational neuroscience as approximations of the dynamics of large ensembles of units; see, for example, [54, 55] and references therein. They have been useful to elucidate, for example, dynamical mechanisms behind the emergence of rhythms in the gamma frequency band, such as the emergence of the pyramidal-interneuronal gamma (PING) rhythm [56] or the interplay between different brain areas (for example, through phase-phase, phase-amplitude and amplitude comodulation) that can lead to frequency entrainment [57]. In terms of classical mean-field approaches, the pioneering works by Wilson and Cowan [58] and Amari [59] stand out: They derived heuristic equations for average neural population dynamics that are still widely used in neural modeling. Specifically, such models disregard fluctuations of individual unitsFootnote 1 and arrive at equations that approximate the evolution of means. By contrast, the exact mean-field reductions we discuss here, the Ott–Antonsen reduction and the Watanabe–Strogatz reduction, can be employed not only for infinite networks but also for networks of finitely many oscillators.
While these reductions only apply to specific classes of systems—and from a mathematical perspective reflect the special structure of these systems—they include models that have been widely used in neuroscience and beyond, such as the Kuramoto model. Compared to heuristic mean-field approximations, the resulting reduced equations are derived exactly from the microscopic equations of individual oscillators and thus capture properties of individual oscillators; because of this property these reduced equations have been referred to as next-generation models [60]. Employing these models in modeling tasks provides a powerful opportunity to bridge the dynamics on microscopic and macroscopic scales.

To illustrate the mean-field reductions and their applicability, we focus here on networks that are organized into distinct (sub)populations because of their practical importance.Footnote 2 The mean-field reductions allow one to replace each subnetwork by a (low-dimensional) set of collective variables to obtain a set of dynamical equations for these variables. This set of mean-field equations describes the system exactly. For the classical Kuramoto model, which is widely used to understand synchronization phenomena, we will see below that the collective state is captured by a two-dimensional mean-field variable that encodes the synchrony of the population. Reducing to a set of mean-field equations provides a simplified—but often still sufficiently complex—description of the network dynamics that can be analyzed by using dynamical systems techniques [61, 62]. We will outline the classes of models for which the mean-field reductions apply and illustrate how these reduction techniques have been instrumental in the last decade to illuminate how the network properties relate to dynamical phenomena. We give a number of concrete examples, from Kuramoto’s problem about the emergence of synchrony in oscillator populations to the emergence of PING rhythms based on microscopic properties of neuronal networks.

There are many important questions and aspects that we cannot touch upon in this review, and we refer to existing reviews and literature instead. First, we only consider oscillator networks where each (microscopic) node has a first-order description by a single phase variable. We will not cover other microscopic models such as second-order phase oscillators or oscillators with a phase and amplitudeFootnote 3 which can give rise to richer dynamics. Second, we do not comment on the validity of a phase reduction; for more information see for example [47, 50]. Third, the reductions we discuss have been essential to understand the emergence of synchrony patterns where coherence and incoherence coexist, also known as “chimeras.” Here, we only mention results relevant to the dynamics of coupled oscillator populations and refer to [63–65] for recent reviews on chimeras. Fourth, the results mentioned here relate to results from network science [66, 67]. In particular, properties of the graph underlying the dynamical network relate to synchronization dynamics [68–70]. Moreover, we typically assume that the network structure is static and does not evolve over time. However, time-dependent network structures are clearly of interest—in particular in the context of plastic neural connectivity and neurological disease. An approach to these issues from the perspective of network science is temporal networks [71], while asynchronous networks take a more dynamical point of view; see [72] and references therein. Fifth, we restrict ourselves to deterministic dynamics where noise is absent. From a mathematical point of view, noise can simplify the analysis, and recent results show that similar reduction methods apply [73–75]. Finally, it is worth noting that other reduction approaches for oscillator networks have recently been explored [76–78].

This review is written with a diverse readership in mind, ranging from mathematicians to computational biologists who want to use the reduced equations for modeling. In fact, this review is intended to have aspects of a tutorial: It provides an introduction to the Ott–Antonsen and Watanabe–Strogatz reductions as exact mean-field reductions and outlines what types of questions they have been instrumental in answering. We include three explicit examples of how these mean-field reductions can be helpful in giving insights into the collective dynamics of (neuronal) oscillator networks.

In the following, we provide an outline of how to approach this paper. The next section sets the stage by introducing the notion of a sinusoidally coupled network, and we summarize the main oscillator models we relate to throughout the paper; these include the Kuramoto model and networks of Theta neurons (which are equivalent to Quadratic Integrate and Fire (QIF) neurons). In the third section, we give a general theory for the mean-field reductions and discuss their limitations: The methods include the Ott–Antonsen reduction for the mean-field limit of nonidentical oscillators and the Watanabe–Strogatz reduction for finite or infinite networks of identical oscillators. This section includes a certain level of mathematical detail to understand the ideas behind the derivation of the reduced equations (mathematically dense sections are marked with the symbol “*” and may be skipped at first reading). If you are mainly interested in applying the reduced equations, you may want to skip ahead to Sects. 3.1.2 and 3.2.2, which summarize the reduced equations for the models we study throughout the paper. In the fourth section, we apply the reductions and emphasize how they are useful to understand how synchrony and patterns of synchrony emerge in such oscillator networks. This includes a number of concrete examples. Since most of these considerations are theoretical and computational, we discuss in the last section how the mean-field reductions can be used to solve neuroscientific problems and be linked with experimental data. We conclude with some remarks and highlight a number of open problems.

List of symbols

The following symbols will be used throughout this paper.

\(\mathbb {N}\):

The positive integers

\(\mathbb {T}\):

The circle of all phases \(\mathbb {R}/2\pi \mathbb {Z}\) (or \([0, 2\pi ]\) with \(0\equiv2\pi\))

\(\mathbb {C}\):

The complex numbers


\(i\):

Imaginary unit \(\sqrt{-1}\)

\(\operatorname {Re}(w )\), \(\operatorname {Im}(w )\):

Real part and imaginary part of a complex number \(w\in \mathbb {C}\)


\(\bar{w}\):

Complex conjugate of \(w\in \mathbb {C}\)


\(M\):

Number of oscillator populations in the network

σ, τ:

Population indices in \(\lbrace 1,\dotsc, M\rbrace\)


\(N\):

Number of oscillators in each population


\(j\), \(k\):

Oscillator indices in \(\lbrace1,\dotsc ,N\rbrace\)


\(\theta_{\sigma,k}\):

Phase of oscillator k in population σ

κ, \(\kappa^{\text{GJ}}\), \(\kappa^{\mathrm {g}}\):

Coupling strength between neural oscillators


\(Z_{\sigma}\):

Kuramoto order parameter of population σ


\(R_{\sigma}\):

The level of synchrony \(\vert Z_{\sigma} \vert \) of population σ

\(z_{\sigma}\), \(\varPsi_{\sigma}\):

Bunch variables of population σ


\(\dot{x}\):

The time derivative \(\frac{\textrm {d}x}{\textrm {d}t}\) of x

Sinusoidally coupled phase oscillator networks

The state of each node in a phase oscillator network is given by a single phase variable. Such networks may be obtained through a phase reduction or may be abstract models in their own right, as in the case of the Theta neuron below. Consider a population σ of N oscillators where the state of oscillator k is given by a phase \(\theta_{\sigma,k}\in \mathbb {T}:=\mathbb {R}/2\pi \mathbb {Z}\); if there is only a single population, we drop the index σ. Without input, the phase of each oscillator \((\sigma, k)\) advances at its intrinsic frequency \(\omega_{\sigma, k}\in \mathbb {R}\). Input to oscillator \((\sigma, k)\) is determined by a field \(H_{\sigma ,k}(t)\in \mathbb {C}\) and modulated by a sinusoidal function; this field could be an external drive or arise from network interactions with oscillators, both within population σ and in other populations τ. Specifically, we consider oscillator networks whose phases evolve according to

$$ \dot{\theta}_{\sigma,k} = \omega_{\sigma, k} + \operatorname {Im}\bigl(H_{\sigma,k}(t) e^{-i\theta_{\sigma,k}} \bigr). $$

Since the effect of the field on oscillator \((\sigma, k)\) is mediated by a function with exactly one harmonic, \(e^{-i\theta_{\sigma,k}}\), we call the oscillator populations sinusoidally coupled.Footnote 4

While we allow the intrinsic frequency and the driving field to depend on the oscillator to a certain extent (i.e., oscillators are nonidentical), we will henceforth assume that all oscillators within any given population σ are otherwise (statistically) indistinguishable, i.e., the properties of each oscillator in a given population are determined by the same distribution. Specifically, suppose that the properties of each oscillator are determined by a certain parameter \(\eta_{\sigma,k}\). This is for example the case for the Theta neurons described further below, each of which has an individual level of excitability as a parameter. Let us formulate this more precisely. Suppose that we let both the intrinsic frequencies and the field be functions of this parameter, i.e., \(\omega_{\sigma, k} = \omega_{\sigma}(\eta_{\sigma,k})\), \(H_{\sigma,k}(t) = H_{\sigma}(t; \eta_{\sigma,k})\). The oscillators of a given population are then indistinguishable if, for a given population σ, all \(\eta _{\sigma,k}\) are random variables sampled from a single probability distribution with density \(h_{\sigma}(\eta)\). In the special case that \(\eta_{\sigma,k}=\eta_{\sigma,j}\) for \(j\neq k\) (in this case \(h_{\sigma}\) is a delta-distribution) we say that the oscillators are identical.

Phase oscillator networks of the form (1) include a range of well-known (and well-studied) models. These range from particular cases of Winfree’s model [79] to neuron models. In the following we discuss some important examples that we will revisit in more detail throughout this paper.

The Kuramoto model

Kuramoto originally studied synchronization in a network of N globally coupled nonidentical (but indistinguishable) phase oscillators [80]; see [81] for an excellent survey of the problem and its historical background. Kuramoto originally investigated the onset of synchronization in a network composed of only a single population of oscillators indexed by \(k\in \lbrace1, \dotsc, N\rbrace\) with phases \(\theta_{k}\) (here we drop the population index σ). The oscillator phases evolve according to

$$ \dot{\theta}_{k} = \omega_{k}+ \frac{K}{N}\sum_{j=1}^{N}\sin (\theta_{j}-\theta_{k}) $$

with distinct intrinsic frequencies \(\omega_{k}\) that are sampled from some unimodal frequency distribution. Here the parameter K is the coupling strength between oscillators and the coupling is mediated by the sine of the phase difference between oscillators. If coupling is absent (\(K=0\)), each oscillator advances with its intrinsic frequency \(\omega_{k}\).

The macroscopic state of the population is characterized by the complex-valued Kuramoto order parameterFootnote 5

$$ Z= Re^{i\phi} =\frac{1}{N}\sum _{j=1}^{N}e^{i\theta_{j}}, $$

representing the mean of all phases on the unit circle. Its magnitude \(R= \vert Z\vert \) describes the level of synchronization of the oscillator population, see Fig. 1: On the one hand, \(R=1\) if and only if all oscillators are phase synchronized, that is, \(\theta_{k}=\theta_{j}\) for all k and j; on the other hand, we have \(R=0\) if, for example, the oscillators are evenly distributed around the circle. The argument ϕ of the Kuramoto order parameter Z (which is well-defined for \(Z\neq0\)) describes the “average phase” of all oscillators, that is, it describes the average position of the oscillator crowd on the circle of phases.
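The order parameter is straightforward to evaluate numerically. The following Python/NumPy sketch (illustrative only; the function name `order_parameter` and the example configurations are our choices) computes Z for a fully phase-synchronized population, a uniformly spread population, and a two-cluster configuration as in Fig. 1:

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter Z = (1/N) * sum_j exp(i*theta_j)."""
    return np.mean(np.exp(1j * np.asarray(theta)))

# Fully phase-synchronized population: R = |Z| = 1.
theta_sync = np.full(100, 0.7)
print(abs(order_parameter(theta_sync)))      # ≈ 1.0

# Phases spread evenly around the circle: R = 0 (up to round-off).
theta_uniform = np.linspace(0, 2 * np.pi, 100, endpoint=False)
print(abs(order_parameter(theta_uniform)))   # ≈ 0.0

# Two antipodal clusters of equal size also give R = 0.
theta_clusters = np.concatenate([np.zeros(50), np.full(50, np.pi)])
print(abs(order_parameter(theta_clusters)))  # ≈ 0.0
```

Note that \(R=0\) does not imply a uniform phase distribution: as the two-cluster example shows, quite different configurations can share the same order parameter.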

Figure 1

The Kuramoto order parameter (3) encodes the level of synchrony of a phase oscillator population. The state of each oscillator is given by a phase \(\theta _{k}\) (black dot, empty arrow) on the circle \(\mathbb {T}\). Panel (a) shows a configuration with high synchrony where \(R= \vert Z\vert \lessapprox1\). Panel (b) shows two configurations with \(R= \vert Z\vert \gtrapprox0\): one where the oscillators are approximately uniformly distributed on the circle, the other one where they are organized into two clusters

Kuramoto observed the following macroscopic behavior: For K small, the system converges to an incoherent stationary state with \(R\approx 0\). As K is increased past a critical coupling strength \(K_{c}\), the system settles down to a state with partial synchrony, \(R>0\). As the coupling strength is further increased, \(K\to\infty\), oscillators become more and more synchronized, \(R\to1\).
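This transition can be reproduced in a few lines of code. The sketch below is a minimal illustration (the function name `simulate_kuramoto` and all numerical parameters, including the forward Euler step size and the choice of a standard normal frequency distribution, are our own): it integrates the Kuramoto model and returns the final level of synchrony R.

```python
import numpy as np

def simulate_kuramoto(K, N=500, dt=0.01, steps=4000, seed=0):
    """Forward Euler integration of the Kuramoto model; returns final R = |Z|."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(N)          # unimodal intrinsic frequencies
    theta = rng.uniform(0, 2 * np.pi, N)    # random initial phases
    for _ in range(steps):
        Z = np.mean(np.exp(1j * theta))     # Kuramoto order parameter
        # Im(K*Z*exp(-i*theta_k)) equals the pairwise sine coupling (K/N)*sum_j sin(theta_j - theta_k).
        theta += dt * (omega + K * np.imag(Z * np.exp(-1j * theta)))
    return abs(np.mean(np.exp(1j * theta)))

print(simulate_kuramoto(K=0.5))   # small R: incoherence, up to finite-size fluctuations
print(simulate_kuramoto(K=4.0))   # large R: partial synchrony
```

For a standard normal frequency distribution, the classical critical coupling is \(K_{c}=2/(\pi g(0))\approx1.60\), so \(K=0.5\) remains incoherent while \(K=4\) settles into substantial partial synchrony.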

The Kuramoto model (2) is an example of a sinusoidally coupled phase oscillator network. Using Euler’s identity \(e^{i\phi }=\cos(\phi)+i\sin(\phi)\), we have

$$\dot{\theta}_{k} = \omega_{k}+\operatorname {Im}\Biggl( \frac{K}{N}\sum_{j=1}^{N}e^{i(\theta_{j}-\theta_{k})} \Biggr) =\omega_{k}+\operatorname {Im}\bigl(KZ(t) e^{-i\theta_{k}} \bigr), $$

where the Kuramoto order parameter \(Z(t) = Z(\theta_{1}(t), \dotsc , \theta_{N}(t))\), as defined in (3), depends on time through the phases. Hence, the Kuramoto model (2) is equivalent to (1) with \(H(t)=KZ(t)\) and the interactions between oscillators are solely determined by the Kuramoto order parameter \(Z(t)\). Such a form of network interaction is also called mean-field coupling since the drive \(H(t)\) to a single oscillator is proportional to a mean field, that is, the average formed from the states of all oscillators in the network.
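This equivalence is easy to verify numerically. In the Python/NumPy sketch below (illustrative; the variable names are our choices), the pairwise sine coupling of the Kuramoto model (2) is compared with the mean-field form \(\operatorname{Im}(KZe^{-i\theta_{k}})\) for a random phase configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 7, 1.3
theta = rng.uniform(0, 2 * np.pi, N)

# Pairwise sine coupling, as in the Kuramoto model (2):
# entry [k, j] of the difference matrix is theta_j - theta_k.
pairwise = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

# Mean-field form: Im(K * Z * exp(-i*theta_k)) with Z from (3).
Z = np.mean(np.exp(1j * theta))
mean_field = np.imag(K * Z * np.exp(-1j * theta))

print(np.allclose(pairwise, mean_field))  # True
```

The two expressions agree to machine precision; the mean-field form reduces the cost of evaluating the coupling from \(O(N^{2})\) to \(O(N)\).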

Problem 1

How can mean-field reductions elucidate Kuramoto’s original problem of the onset of synchronization in an infinitely large population of oscillators? We will revisit this problem in Example 1 below.

Populations of Kuramoto–Sakaguchi oscillators

Sakaguchi generalized Kuramoto’s model by introducing an additional phase-lag (or phase-frustration) parameter which approximates a time delay in the interactions between oscillators [63, 82]. While Sakaguchi originally considered a single population of oscillators, here we generalize to multiple interacting populations. Specifically, we consider the dynamics of M populations of N Kuramoto–Sakaguchi oscillators, where the phase of oscillator k in population σ evolves according to

$$ \dot{\theta}_{\sigma,k} = \omega_{\sigma,k}+\sum _{\tau=1}^{M}\frac{K_{\sigma\tau}}{ N}\sum _{j=1}^{N}\sin(\theta _{\tau,j}- \theta_{\sigma,k}-\alpha_{\sigma\tau}), $$

and where \(K_{\sigma\tau}\geq0\) is the coupling strength and \(\alpha_{\sigma\tau}\) is the phase lag between populations σ and τ.Footnote 6 The function \(g_{\sigma\tau}(\phi)=K_{\sigma\tau}\sin(\phi -\alpha_{\sigma\tau})\) mediates the interactions between oscillators, and we refer to it as the coupling function; later on we will also briefly touch upon what happens if the sine function is replaced by a general periodic coupling function. As in the Kuramoto model, an important point is that the influence between oscillators \((\tau,j)\) and \((\sigma,k)\) depends only on their phase difference (rather than explicitly on their phases).Footnote 7 Thus, this form of interaction only depends on the relative phase between oscillator pairs rather than the absolute phases. An important consequence is that the dynamics of Eqs. (4) do not change if we consider all phases in a different reference frame. For example, going into a reference frame rotating at constant frequency \(\omega _{\mathrm {f}}\in \mathbb {R}\) corresponds to the transformation \(\theta_{\sigma,k}\mapsto\theta_{\sigma,k}-\omega _{\mathrm {f}}t\), which only shifts all intrinsic frequencies by \(\omega _{\mathrm {f}}\) rather than changing the dynamics qualitatively.Footnote 8

The network (4) of M interacting populations of Kuramoto–Sakaguchi oscillators is a sinusoidally coupled oscillator network. The amount of synchrony in population σ is determined by the Kuramoto order parameter (3) for population σ,

$$ Z_{\sigma}= \frac{1}{N}\sum _{j=1}^{N}e^{i\theta _{\sigma,j}}. $$

Combining coupling strength and phase lag, we define the complex interaction parameter \(c_{\sigma\tau} := K_{\sigma\tau} e^{-i\alpha _{\sigma\tau}}\) between populations σ and τ. By the same calculation as above, the network (4) is equivalent to (1) with constant intrinsic frequencies \(\omega_{\sigma,k}\) and driving field

$$\begin{aligned} H_{\sigma}= \sum_{\tau=1}^{M}c_{\sigma\tau} Z_{\tau}, \end{aligned}$$

which is a linear combination of the mean fields of all populations.
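The same calculation can be checked numerically for several interacting populations. The following sketch (illustrative; all variable names and parameter values are our own choices) evaluates the double sum in (4) directly and compares it with the driving field \(H_{\sigma}=\sum_{\tau}c_{\sigma\tau}Z_{\tau}\), where \(c_{\sigma\tau}=K_{\sigma\tau}e^{-i\alpha_{\sigma\tau}}\):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 3, 5
theta = rng.uniform(0, 2 * np.pi, (M, N))   # theta[sigma, k]
K = rng.uniform(0.5, 1.5, (M, M))           # coupling strengths K[sigma, tau]
alpha = rng.uniform(-0.5, 0.5, (M, M))      # phase lags alpha[sigma, tau]

# Direct evaluation of the double sum in (4) for every oscillator (sigma, k).
coupling = np.zeros((M, N))
for s in range(M):
    for k in range(N):
        for t in range(M):
            coupling[s, k] += (K[s, t] / N) * np.sin(theta[t] - theta[s, k] - alpha[s, t]).sum()

# Driving field: H_sigma = sum_tau c_{sigma tau} * Z_tau with c = K * exp(-i*alpha).
Z = np.mean(np.exp(1j * theta), axis=1)     # order parameter of each population
H = (K * np.exp(-1j * alpha)) @ Z

print(np.allclose(coupling, np.imag(H[:, None] * np.exp(-1j * theta))))  # True
```

As in the single-population case, the interactions felt by any oscillator are fully determined by the M complex order parameters rather than by the individual phases.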

Networks of Kuramoto–Sakaguchi oscillators have been used as models for synchronization phenomena. In neuroscience, individual oscillators can represent neurons [83] or large numbers of neurons in neural masses [51, 84, 85]. In the framework of the model (4), the populations can be thought of as M neural masses. In contrast to models where neural masses only have a phase, here, the macroscopic state of each population (neural mass) is determined by an amplitude (the level of synchrony \(R_{\sigma}:= |Z_{\sigma}|\)) and an angle (the average phase \(\phi_{\sigma}:= \arg{Z_{\sigma}}\)).

Theta and Quadratic Integrate and Fire neurons

Theta neurons

The Theta neuron is the normal form of the saddle-node-on-invariant-circle (SNIC) or saddle-node-infinite-period (SNIPER) bifurcation [86] as shown in Fig. 2: At the excitation threshold, a saddle and a node coalesce on an invariant circle (i.e., the limit cycle of the neuron). Its state is described by the phase \(\theta\in \mathbb {T}\) on the invariant circle, and we use the conventionFootnote 9 that the neuron fires (it emits a spike) when the phase crosses \(\theta=\pi\) (Fig. 2). The Theta neuron is a valid description of the dynamics of any neuron model undergoing this bifurcation, in some parameter neighborhood of the bifurcation. The Theta neuron is also a canonical type I neuron [87].

Figure 2

A Theta neuron (7) with phase \(\theta _{k}\) subject to constant input I undergoes a saddle-node bifurcation on an invariant circle (SNIC) as the quantity \(\iota_{k}=\eta_{k}+\kappa I\) is varied. The neuron spikes if its phase \(\theta_{k}\) crosses \(\theta_{k}=\pi\). If \(\iota_{k}<0\) the Theta neuron is excitable: the phase will relax to the stable equilibrium, and a perturbation of the phase across the saddle equilibrium (its threshold) will lead to a single spike before returning to equilibrium. For \(\iota_{k}>0\), the Theta neuron spikes periodically

Consider a single population of Theta neurons (hence we drop the population index σ) whose phases evolve according to

$$ \dot{\theta}_{k}=1-\cos{\theta_{k}}+(1+\cos{ \theta_{k}}) (\eta_{k}+\kappa I), $$

where \(\eta_{k}\) is the excitability of neuron k sampled from a probability distribution, κ is the coupling strength, and I is an input current—this could result from external input (driving) or network interactions. A population of Theta neurons (7) is a sinusoidally coupled system of the form (1) with

$$\begin{aligned} \omega_{k} &= 1+\eta_{k}+\kappa I,\qquad H_{k} = i(\eta_{k}+\kappa I-1). \end{aligned}$$

The dependence of \(H, \omega\) on the excitability parameters \(\eta_{k}\) can be made explicit by writing \(\omega_{k} = \omega(\eta_{k})\), \(H(t) = H(t; \eta_{k})\). Thus, results for models of the form (1) will also apply to networks of Theta neurons.
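This equivalence can again be verified numerically. The sketch below (illustrative; the parameter values are arbitrary) compares the right-hand side of the Theta neuron model (7) with the sinusoidal form (1), using \(\omega = 1+\eta+\kappa I\) and \(H = i(\eta+\kappa I-1)\):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 101)
eta, kappa, I = -0.3, 1.0, 0.5
iota = eta + kappa * I

# Right-hand side of the Theta neuron model (7).
theta_neuron = 1 - np.cos(theta) + (1 + np.cos(theta)) * iota

# Sinusoidal form (1) with omega = 1 + iota and H = i*(iota - 1):
# H*exp(-i*theta) = (iota - 1)*(sin(theta) + i*cos(theta)), whose
# imaginary part is (iota - 1)*cos(theta).
omega = 1 + iota
H = 1j * (iota - 1)
sinusoidal = omega + np.imag(H * np.exp(-1j * theta))

print(np.allclose(theta_neuron, sinusoidal))  # True
```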

The Theta neuron was introduced in 1986 [87] and has since then been widely used in neuroscience. We refer for example to [88, 89] for a general introduction and only list a few concrete applications here. For example, Monteforte and Wolf [90] used these neurons as canonical type I neuronal oscillators in their study of chaotic dynamics in large, sparse balanced networks. References [91, 92] considered spatially extended networks of Theta neurons and the authors were specifically interested in traveling waves of activity in these networks. More recently, other authors have used some of the techniques for dimensional reduction reviewed in the present paper to study infinite networks of Theta neurons [93, 94]. We will discuss these reduction methods in detail further below.

Problem 2

What different dynamics are possible in a single population of globally coupled Theta neurons with pulsatile coupling? What is the onset for firing of neurons? We will revisit this problem in Example 2 below.

Quadratic Integrate and Fire neurons

The Theta neuron model is closely related to the Quadratic Integrate and Fire (QIF) neuron model [95] whose state is given by a membrane voltage \(V\in(-\infty, +\infty)\). More precisely, using the transformation \(V_{k}=\tan{(\theta_{k}/2)}\) the population of Theta neurons (7) becomes a population of QIF neurons, where the membrane voltage \(V_{k}\) of neuron k evolves according to

$$ \dot{V}_{k}=V_{k}^{2}+ \eta_{k}+\kappa I. $$

Here we use the rule that the neuron fires (it emits a spike) if its voltage reaches \(V_{k}(t^{-})=+\infty\) and then the neuron is reset to \(V_{k}(t^{+})=-\infty\).
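The transformation can be checked with the chain rule: differentiating \(V_{k}=\tan(\theta_{k}/2)\) gives \(\dot{V}_{k}=\dot{\theta}_{k}/(2\cos^{2}(\theta_{k}/2))\), which together with (7) reduces to \(\dot{V}_{k}=V_{k}^{2}+\eta_{k}+\kappa I\). The Python sketch below (illustrative; the parameter values are arbitrary) verifies this identity numerically, away from the spike at \(\theta=\pi\) where V diverges:

```python
import numpy as np

eta, kappa, I = 0.8, 1.0, 0.2
iota = eta + kappa * I
theta = np.linspace(-3.0, 3.0, 201)   # avoid theta = +/- pi, where V diverges

theta_dot = 1 - np.cos(theta) + (1 + np.cos(theta)) * iota   # Theta neuron (7)
V = np.tan(theta / 2)                                        # V = tan(theta/2)
V_dot = 0.5 * theta_dot / np.cos(theta / 2) ** 2             # chain rule dV/dt

print(np.allclose(V_dot, V ** 2 + iota))  # True
```

The firing rule is consistent under the transformation: \(\theta\) crossing π corresponds exactly to V escaping to +∞ and being reset at −∞.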

QIF neurons have been widely used in neuroscientific modeling; see [88, 89] for a general introduction and [96–99] for a few examples in the literature where QIF neurons are employed. They have the simplicity of the more common leaky integrate-and-fire model in the sense of having only one state variable (the voltage), but are more realistic in the sense of actually producing spikes in the voltage trace \(V(t)\).

Problem 3

How does a network of neurons respond to a transient stimulus? Specifically, we ask this question for a neuronal network modeled as a heterogeneous network of all-to-all coupled QIF neurons. This is a pertinent question, for example, if stimulation is used for therapeutic purposes such as in Deep Brain Stimulation. We will revisit this problem in Example 3 below.

Exact mean-field descriptions for sinusoidally coupled phase oscillators

In this section, we review how sinusoidally coupled phase oscillator networks (1) can be simplified using mean-field reductions. Under specific assumptions (detailed further below) we derive low-dimensional systems of ordinary differential equations for macroscopic mean-field variables that describe the evolution of sinusoidally coupled phase oscillator networks (1) exactly. This is in contrast to reductions that are only approximate or only valid over short time scales. Thus, these reduction methods facilitate the analysis of the network dynamics: rather than looking at a complex, high-dimensional network dynamical system (or its infinite-dimensional mean-field limit), we can analyze simpler, low-dimensional equations. For example, for the infinite-dimensional limit of the Kuramoto model, we obtain a closed system for the evolution of Z, a two-dimensional system (since Z is complex). While the Kuramoto model is particularly simple, the methods apply for general driving fields \(H_{\sigma,k}\) that could contain delays or depend explicitly on time. We give concrete examples in Sect. 4 below, where we apply the reduction techniques.

Importantly, these mean-field reductions also apply to oscillator networks which are equivalent to (1). In particular, this applies to neural oscillators: The QIF neuron and the Theta neuron are equivalent as discussed above. Consequently, rather than assuming a model for a neural population (e.g., [51]), we actually obtain an exact description of interacting neural populations in terms of their macroscopic (mean-field) variables.

Ott–Antonsen reduction for the mean-field limit of nonidentical oscillators

The Ott–Antonsen reduction applies to the mean-field limit of populations of indistinguishable sinusoidally coupled phase oscillators (1). We first outline the basic steps to derive the equations and highlight the assumptions made along the way; this section contains mathematical details and may be omitted on first reading. We then summarize the Ott–Antonsen equations for the models described in the previous section.

*Derivation of the reduced equations

Consider the dynamics of the (mean-field) limit of (1) with infinitely many oscillators, \(N\to\infty \). Note that while the population index σ is seen as discrete in this paper, it is also possible to apply the reduction to continuous topologies of populations such as rings; cf. [100, 101]. To simplify the exposition, we consider the classical case where the intrinsic frequency is the random parameter, \(\omega_{\sigma,k}=\eta_{\sigma, k}\), and that the driving field is the same for all oscillators in any population, \(H_{\sigma,k} = H_{\sigma}\); for details on systems with explicit parameter dependence (such as Theta neurons) see [102, 103]. Hence, suppose that the intrinsic frequencies \(\omega_{\sigma,k}\) are randomly drawn from a distribution with density \(h_{\sigma}(\omega)\) on \(\mathbb {R}\). In the mean-field limit, the state of each population at time t is not given by a collection of oscillator phases, but rather by a probability density \(f_{\sigma}(\omega, \vartheta ; t)\) for an oscillator with intrinsic frequency \(\omega\in \mathbb {R}\) to have phase \(\vartheta \in \mathbb {T}\); see [104] for general properties of such distributions and statistics on the circle. For a set of phases \(B\subset \mathbb {T}\) the marginal \(\int_{B}\int_{\mathbb {R}}f_{\sigma}(\omega, \vartheta ; t)\,\textrm {d}\omega \,\textrm {d}\vartheta \) determines the fraction of oscillators whose phase is in B at time t. Moreover, we have \(\int_{\mathbb {T}}f_{\sigma}(\omega, \vartheta ; t)\,\textrm {d}\vartheta = h_{\sigma}(\omega)\) for all times t by our assumption that the intrinsic frequencies do not change over time.

Conservation of oscillators implies that the dynamics of the mean-field limit of (1) is given by the transport equation

$$\begin{aligned} \frac{\partial f_{\sigma}}{\partial t} + \frac{\partial}{\partial \vartheta } (v_{\sigma}f_{\sigma})&=0 \quad\text{with } v_{\sigma}=\omega+ \operatorname {Im}\bigl(H_{\sigma}(t)e^{-i\vartheta } \bigr). \end{aligned}$$

Because oscillators are conserved, the change of the phase distribution over time is determined by the change of phases given by the velocity \(v_{\sigma}\) through (1) at time t of an oscillator with phase ϑ and intrinsic frequency ω. While the transport equation for the mean-field limit originally appears in Refs. [105, 106], it can be rigorously derived from a measure-theoretic perspective as a Vlasov limit [107].

Before we discuss how to find solutions for the transport equation (10), it is worth noting that it has been analyzed directly in the context of functional analysis for networks of Kuramoto oscillators. Stationary solutions of (10) and their stability have been studied recently in the context of all-to-all coupled networks of Kuramoto oscillators [108–112]. Taking the mean-field limit for \(N\to\infty\) depends on the homogeneity of the network. For certain classes of structured networks—networks on convergent families of random graphs, where a limiting object (a graphon) can be defined as the number of nodes \(N\to \infty\)—it is possible to define and analyze the dynamics of the resulting continuum limit [113, 114].

Ott and Antonsen [115] showed that there exists a manifold of invariant probability densities for the transport equation (10). Specifically, if \(f_{\sigma}(\vartheta ,\omega;0)\) is on the manifold, then the density \(f_{\sigma}(\vartheta ,\omega;t)\) remains on the manifold for all times \(t\geq0\). Let

$$\begin{aligned} Z_{\sigma}&:= \int_{-\infty}^{\infty}\int_{-\pi}^{\pi}f_{\sigma}(\vartheta , \omega;t)e^{i\vartheta }\,\textrm {d}\vartheta \,\textrm {d}\omega \end{aligned}$$

denote the Kuramoto order parameter (3) in the mean-field limit. We will see below that the evolution on the invariant manifold is now described by a simple ordinary differential equation for \(Z_{\sigma}\) for each population σ.

In the following we outline the key steps to derive a set of reduced equations and refer to [115–117] for further details. Let \(\bar{w}\) denote the complex conjugate of \(w\in \mathbb {C}\). Suppose that \(f_{\sigma}(\vartheta ,\omega;t)\) can be expanded in a Fourier series in the phase angle ϑ of the form

$$\begin{aligned} f_{\sigma}(\vartheta ,\omega;t) &= \frac{h_{\sigma}(\omega)}{2\pi} \bigl( 1 + f^{+}_{\sigma}+ \bar{f}^{+}_{\sigma}\bigr) \quad \text{where} \quad f^{+}_{\sigma}= \sum_{n=1}^{\infty}\ f^{(n)}_{\sigma }(\omega,t) e^{i n\vartheta }. \end{aligned}$$

Here it is assumed that \(f^{+}_{\sigma}\) has an analytic continuation into the lower complex half plane \(\lbrace \operatorname {Im}(\omega )<0 \rbrace\) (and \(f^{-}_{\sigma}:=\bar{f}^{+}_{\sigma}\) into \(\lbrace \operatorname {Im}(\omega )>0 \rbrace\)); even with this assumption we can solve a large class of problems, but it poses a restriction in a number of practical cases discussed in Sect. 3.3 below. Ott and Antonsen now imposed the ansatz that the Fourier coefficients are powers of a single function \(a_{\sigma}(\omega,t)\),

$$\begin{aligned} f^{(n)}_{\sigma}(\omega,t) &= \bigl( a_{\sigma}(\omega ,t) \bigr)^{n}. \end{aligned}$$

If \(\vert a_{\sigma}(\omega,t) \vert <1\) this ansatz is equivalent to the Poisson kernel structure for the unit disk, \(f^{+}_{\sigma} = (a_{\sigma}e^{i\vartheta })/(1-a_{\sigma}e^{i \vartheta })\). Substituting (12) subject to the ansatz (13) into (10) yields

$$\begin{aligned} \frac{\partial a_{\sigma}}{\partial t} +i\omega a_{\sigma}+ \frac{1}{2} \bigl(H_{\sigma} a_{\sigma}^{2}- \bar{H}_{\sigma}\bigr)&= 0. \end{aligned}$$

Thus, the ansatz (13) reduces the integro-partial differential equation (10) to a single ordinary differential equation in \(a_{\sigma}\) for each population σ. (More precisely, there is an infinite set of such equations with identical structure, one for each ω.) Finally, with (13) we obtain

$$\begin{aligned} Z_{\sigma}&= \int_{-\infty}^{\infty}\bar{a}_{\sigma}( \omega ,t) h_{\sigma}(\omega) \,\textrm {d}\omega, \end{aligned}$$

which relates \(a_{\sigma}\) and the order parameter \(Z_{\sigma}\) in (11).
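The structure of the ansatz is easy to verify numerically. The sketch below (an illustrative value of \(a\) with \(\vert a\vert<1\); not part of the derivation) checks that the series (12) with coefficients (13) sums to the Poisson kernel, that the resulting density is nonnegative and normalized, and that its first circular moment equals ā, consistent with (15) for a sharply peaked frequency distribution:

```python
import numpy as np

a = 0.5 * np.exp(0.3j)                         # illustrative value, |a| < 1
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dtheta = 2 * np.pi / len(theta)

# Truncated series (12) with Fourier coefficients a^n versus the closed form
f_plus = sum(a**n * np.exp(1j * n * theta) for n in range(1, 80))
closed = a * np.exp(1j * theta) / (1 - a * np.exp(1j * theta))
print(np.max(np.abs(f_plus - closed)))         # truncation error only

# The corresponding density is a nonnegative, normalized Poisson kernel
f = (1 + 2 * np.real(f_plus)) / (2 * np.pi)
print(f.min() >= 0, np.sum(f) * dtheta)        # True, ~1

# Its first circular moment is conj(a), i.e., the order parameter
Z = np.sum(f * np.exp(1j * theta)) * dtheta
print(abs(Z - np.conj(a)))                     # ~0
```

Note that the rectangle rule on a full period is spectrally accurate here, so the checks hold essentially to machine precision.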

Assuming analyticity, this integral may be evaluated using the residue theorem of complex analysis. These equations take a particularly simple form if the distribution of intrinsic frequencies \(h_{\sigma}(\omega)\) is Lorentzian with mean \(\hat {\omega }_{\sigma}\) and width \(\Delta_{\sigma}\), i.e.,

$$\begin{aligned} h_{\sigma}(\omega)&= \frac{1}{\pi}\frac{\Delta_{\sigma}}{(\omega -\hat {\omega }_{\sigma})^{2}+\Delta_{\sigma}^{2}}, \end{aligned}$$

since \(h_{\sigma}(\omega)\) has two simple poles at \(\hat {\omega }_{\sigma}\pm i\Delta_{\sigma}\) and thus (15) gives \(Z_{\sigma}=\bar{a}_{\sigma}(\hat {\omega }_{\sigma}-i\Delta_{\sigma},t)\) under the assumption \(|a_{\sigma}(\omega,t)|\to0\) as \(\operatorname {Im}(\omega )\to-\infty\). As a result, we obtain the two-dimensional differential equation—the Ott–Antonsen equations for a Lorentzian frequency distribution—for the order parameter in population σ,

$$\begin{aligned} \dot{Z}_{\sigma}&= (-\Delta_{\sigma}+i \hat {\omega }_{\sigma})Z_{\sigma}+\frac {1}{2}H_{\sigma}- \frac{1}{2}\bar{H}_{\sigma}{ Z}_{\sigma}^{2}. \end{aligned}$$
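The pole evaluation used above can be sanity-checked numerically: integrating any test function that is analytic and decaying in the lower half plane against the Lorentzian (16) must return its value at the pole \(\hat{\omega}-i\Delta\). A quick check with the illustrative choice \(g(\omega)=e^{-i\omega}\) (our example, not part of the derivation):

```python
import numpy as np

omega_hat, Delta = 1.0, 0.5                    # illustrative Lorentzian parameters
d = 0.01
omega = np.arange(-500, 500, d) + d / 2        # midpoint grid on a wide interval
h = (Delta / np.pi) / ((omega - omega_hat)**2 + Delta**2)

g = np.exp(-1j * omega)         # analytic and decaying in the lower half plane
numeric = np.sum(h * g) * d
exact = np.exp(-1j * (omega_hat - 1j * Delta))   # g evaluated at the pole
print(abs(numeric - exact))                      # small (quadrature + tail error)
```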

We note that this reduction method also works for other frequency distributions \(h_{\sigma}\), as outlined in [117]. However, the resulting mean-field equation will not always be a single equation but could be a set of coupled equations. For example, for multi-modal frequency distributions \(h_{\sigma}\) the Ott–Antonsen equations will have an equation for each mode; see [103, 118, 119] and the discussion below.

The derivation above only states that there exists an invariant manifold of densities \(f_{\sigma}\) for the transport equation (10). What happens to densities \(f_{\sigma}\) that are not on the manifold as time evolves? Under some assumptions on the distribution in intrinsic frequencies \(h_{\sigma}\), Ott and Antonsen also showed in [116] that there are densities \(f_{\sigma}\) that are attracted to the invariant manifold. In other words, the dynamics of the Ott–Antonsen equations capture the long-term dynamics of a wider range of initial phase distributions \(f_{\sigma}(\vartheta ,\omega ;0)\), whether they satisfy (13) initially or not. We discuss this in more detail below.

Ott–Antonsen equations for commonly used oscillator models

We now summarize the Ott–Antonsen equations (OA) for the commonly used oscillator models described in Sect. 2. Here we focus on Lorentzian distributions of the intrinsic frequencies or excitabilities; for Ott–Antonsen equations for other parameter distributions such as normal or bimodal distributions see [115, 118].

The Kuramoto model

Consider the mean-field limit of the Kuramoto model (2) with a Lorentzian distribution of intrinsic frequencies. Recall that the driving field for the Kuramoto model was \(H(t)=KZ(t)\). Substituting this into (OA) we obtain Ott–Antonsen equations for the Kuramoto model

$$\begin{aligned} \dot{Z}&= (-\Delta+i\hat {\omega })Z+\frac{K}{2}Z\bigl(1 - \vert Z\vert ^{2} \bigr), \end{aligned}$$

a two-dimensional system of equations since Z is complex-valued.
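As a quick numerical check (our sketch, forward Euler with illustrative parameters, written in the frame co-rotating with ω̂ so that the \(i\hat{\omega}Z\) term drops), integrating (17) shows the order parameter settling at \(R=\sqrt{1-2\Delta/K}\) above the critical coupling \(K_{c}=2\Delta\) and decaying to incoherence below it:

```python
import numpy as np

def oa_kuramoto(K, Delta, Z0=0.1 + 0j, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the Ott-Antonsen equation (17),
    in the frame co-rotating with the mean frequency (omega_hat = 0)."""
    Z = Z0
    for _ in range(int(t_max / dt)):
        Z = Z + dt * (-Delta * Z + 0.5 * K * Z * (1 - abs(Z) ** 2))
    return Z

# Above K_c = 2*Delta the order parameter locks onto R = sqrt(1 - 2*Delta/K);
# below K_c it decays to the incoherent state R = 0.
print(abs(oa_kuramoto(K=2.0, Delta=0.5)))   # ~ sqrt(0.5)
print(abs(oa_kuramoto(K=0.8, Delta=0.5)))   # ~ 0
```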

Kuramoto–Sakaguchi equations

For the Kuramoto–Sakaguchi equations (4) the driving field is a weighted sum of the individual population order parameters (6). Assuming a Lorentzian distribution of intrinsic frequencies with mean \(\hat {\omega }_{\sigma}\) and width \(\Delta _{\sigma}\) for each population \(\sigma\in \lbrace1,\ldots ,M\rbrace\), we obtain from (OA) the Ott–Antonsen equations for coupled populations of Kuramoto–Sakaguchi oscillators,

$$ \dot{Z}_{\sigma}= (-\Delta_{\sigma}+i \hat {\omega }_{\sigma})Z_{\sigma}+ \frac {1}{2} \Biggl(\sum _{\tau=1}^{M}c_{\sigma\tau} Z_{\tau}-Z_{\sigma}^{2}\sum _{\tau=1}^{M}\bar{c}_{\sigma\tau}\bar{Z}_{\tau}\Biggr). $$

In other words, the Ott–Antonsen equations are a 2M-dimensional system that describe the interactions of the order parameters \(Z_{\sigma}\).
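A minimal sketch of how (18) is integrated in practice (forward Euler; the symmetric coupling matrix and all other parameter values are illustrative choices): for two identical populations with equal coupling entries, both order parameters relax to the single-population level \(R=\sqrt{1-2\Delta/K}\) with \(K=\sum_{\tau}c_{\sigma\tau}\).

```python
import numpy as np

def oa_populations(c, Delta, omega_hat, Z0, dt=0.01, t_max=200.0):
    """Forward-Euler integration of the coupled Ott-Antonsen equations (18).
    c: M x M (possibly complex) coupling matrix; Delta, omega_hat, Z0: length M."""
    Z = np.array(Z0, dtype=complex)
    for _ in range(int(t_max / dt)):
        H = c @ Z                              # driving field of each population
        Z = Z + dt * ((-Delta + 1j * omega_hat) * Z
                      + 0.5 * (H - Z**2 * np.conj(H)))
    return Z

# Two identical populations, symmetric coupling with row sum K = 2 (illustrative)
c = np.array([[1.0, 1.0], [1.0, 1.0]])
Z = oa_populations(c, Delta=np.array([0.5, 0.5]),
                   omega_hat=np.array([0.0, 0.0]), Z0=[0.1, 0.2j])
print(np.abs(Z))    # both ~ sqrt(1 - 2*0.5/2) = sqrt(0.5)
```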

Networks of Theta neurons

Consider a single population of Theta neurons with drive \(I(t)\) given by (7) with parameter-dependent intrinsic frequencies and driving field (8); we omit the population index σ. Assume that the variations in excitability \(\eta_{k}\) are chosen from a Lorentzian distribution with mean η̂ and width Δ. We obtain the Ott–Antonsen equations for the mean-field limit of a population of Theta neurons (8)

$$ \dot{Z}=\frac{1}{2} \bigl((i\hat {\eta }-\Delta) (1+Z)^{2}-i(1- Z)^{2} \bigr)+\frac{1}{2}i(1+Z)^{2}\kappa I. $$

Note that in contrast to (18), this is not a closed set of equations yet as the exact form of the input current is still unspecified. We will close these equations in Sect. 4.2.1 below by writing I in terms of Z for different types of neural interactions.

The order parameter for the Theta neuron directly relates to quantities with a physical interpretation such as the average firing rate of the network. Integrating the phase distribution (12) over the excitability parameter η under assumption (13) we obtain the distribution of all phases,

$$ p(\theta,t)=\frac{1}{2\pi} \biggl(\frac{1- \vert Z\vert ^{2}}{1-Ze^{-i\theta}-\bar{Z}e^{i\theta}+ \vert Z\vert ^{2}} \biggr) = \frac{1}{2\pi} \operatorname {Re}\biggl(\frac{1+\bar{Z}e^{i\theta}}{1-\bar {Z}e^{i\theta}} \biggr), $$

where Z may be a function of time. This distribution can be used to determine the probability that a Theta neuron has phase θ. Since a Theta neuron fires when its phase crosses \(\theta=\pi\), the average firing rate \(r(t)\) of the network at time t is the flux through \(\theta=\pi\), i.e.,

$$ r(t)=\bigl(p(\theta,t)\dot{\theta}\bigr)\big|_{\theta=\pi}= \frac{1}{\pi} \operatorname {Re}\biggl(\frac{1-\bar{Z}(t)}{1+\bar{Z}(t)} \biggr). $$

Here we used that \(\dot{\theta}|_{\theta=\pi}=2\) by (7), independent of θ. The same result is obtained from the firing rate equations of the QIF neuron as we explain in the next paragraph.
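Both identities are straightforward to check numerically for an illustrative (fixed) value of Z; the sketch below verifies that the two closed forms in (20) agree, that p is normalized, and that the flux \(2p(\pi,t)\) reproduces (21):

```python
import numpy as np

Z = 0.4 + 0.2j                                 # illustrative order parameter
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dtheta = 2 * np.pi / len(theta)

# The two closed forms of the phase density p(theta, t) in (20)
den = 1 - 2 * np.real(np.conj(Z) * np.exp(1j * theta)) + abs(Z) ** 2
p1 = (1 - abs(Z) ** 2) / (2 * np.pi * den)
p2 = np.real((1 + np.conj(Z) * np.exp(1j * theta))
             / (1 - np.conj(Z) * np.exp(1j * theta))) / (2 * np.pi)
print(np.max(np.abs(p1 - p2)), np.sum(p1) * dtheta)   # ~0, ~1

# Firing rate (21): flux through theta = pi, where theta_dot = 2
p_at_pi = (1 - abs(Z) ** 2) / (2 * np.pi * (1 + 2 * np.real(Z) + abs(Z) ** 2))
r = np.real((1 - np.conj(Z)) / (1 + np.conj(Z))) / np.pi
print(abs(2 * p_at_pi - r))                            # ~0
```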

Ott–Antonsen reduction for equivalent networks

The mean-field reductions are also valid for systems that are equivalent to a network of sinusoidally coupled phase oscillators (1). As an example, we discussed the relationship between QIF and Theta neurons above via the transformation \(V=\tan{(\theta/2)}\), which carries over to the mean-field limit of infinitely many neurons where the Ott–Antonsen equations apply. More specifically, this transformation converts the distribution of phases (20) into a distribution

$$ \tilde{p}(V,t)=\frac{X(t)}{\pi((V-Y(t))^{2}+X^{2}(t))} $$

of voltages where \(Z = (1-\bar{W})/(1+\bar{W})\) and \(W=X+iY\) and \(X,Y\in\mathbb{R}\). Equation (22) is called the Lorentzian ansatz in [102]. Importantly, the quantity W is obtained from a conformal transformation of the order parameter Z. This allows one to convert the Ott–Antonsen equations for the Theta neurons (19) to an equation for the mean field \(W=(1-\bar{Z})/(1+\bar{Z})\), given by

$$ \dot{W} = i\hat {\eta }+\Delta-iW^{2}+iI, $$

which describes the QIF neurons. The advantage of this formulation is that both the real and imaginary parts of W have physical interpretations: \(Y(t)\) is the average voltage across the network and \(X(t)\) relates to the firing rate r of the population, i.e., the flux at \(V=\infty\), since \(\lim_{V\rightarrow\infty}\tilde{p}(V,t)\dot{V}(t) = X(t)/\pi=r\) [102].
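For constant input I, the stationary state of (23) follows from \(0=i(\hat{\eta}+I)+\Delta-iW^{2}\), i.e., \(W^{*}=\sqrt{\hat{\eta}+I-i\Delta}\) (principal root, so that \(X>0\)); its real part then gives the stationary firing rate \(r=X/\pi\). A forward-Euler sketch with illustrative parameters confirms this:

```python
import numpy as np

def qif_meanfield(eta_hat, Delta, I, W0=1.0 + 0j, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the QIF mean-field equation (23)
    for a constant input current I."""
    W = W0
    for _ in range(int(t_max / dt)):
        W = W + dt * (1j * eta_hat + Delta - 1j * W ** 2 + 1j * I)
    return W

eta_hat, Delta, I = 1.0, 0.5, 0.5              # illustrative parameters
W = qif_meanfield(eta_hat, Delta, I)
W_star = np.sqrt(eta_hat + I - 1j * Delta)     # stationary state of (23)
print(abs(W - W_star))                         # ~0
print("stationary firing rate r =", W.real / np.pi)
```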

Watanabe–Strogatz reduction for identical oscillators

Mean-field reductions are possible for both finite and infinite networks for populations of identical oscillators. These reductions are due to the high level of degeneracy in the system, i.e., there are many quantities that are conserved as time evolves. This degeneracy was first observed in the early 1990s for coupled Josephson junction arrays [120], which relate directly to Kuramoto’s model of coupled phase oscillators [121]. Watanabe and Strogatz [122, 123] were able to calculate the preserved quantities explicitly using a clever transformation of the phase variables, thereby reducing the Kuramoto model from the N oscillator phases to three time-dependent (mean-field) variables together with \(N-3\) constants of motion. In terms of mathematical theory, the degeneracy originates from restrictions imposed by the algebraic structure of the equations [124–126], which is still an area of active research [127, 128].

The Watanabe–Strogatz reduction applies to sinusoidally coupled phase oscillator populations in which the oscillators within each population are identical, i.e., all oscillators have the same intrinsic frequency, \(\omega_{\sigma,k} = \omega_{\sigma}\), and are driven by the same field \(H_{\sigma,k} = H_{\sigma}\). Indeed, the Watanabe–Strogatz and Ott–Antonsen reductions have been shown to be intricately linked [125, 129], as we briefly discuss below. Here, we focus on finite networks for simplicity. In the following section we give the equations in full generality with some mathematical detail; subsequently, we state the equations for the commonly used oscillator models discussed above.

*Constants of motion yield reduced equations

The dynamics of a finite population (1) with \(N>3\) identical oscillators can be described exactly in terms of three macroscopic (mean-field) variables [122, 123, 129, 130]: the bunch amplitude \(\rho_{\sigma}\), bunch phase \(\varPhi_{\sigma}\), and phase distribution variable \(\varPsi_{\sigma}\). Similar to the modulus and phase of the Kuramoto order parameter \(Z_{\sigma}=R_{\sigma}e^{i\phi_{\sigma}}\), the bunch amplitude \(\rho _{\sigma}\) and bunch phase \(\varPhi_{\sigma}\) characterize synchrony (or equivalently, the maximum of the phase distribution); while \((R_{\sigma}, \phi_{\sigma})\) and \((\rho_{\sigma},\varPhi_{\sigma})\) do not coincide in general, they do if the population is fully synchronized. The phase distribution variable \(\varPsi_{\sigma}\) determines the shift of individual oscillators with respect to \(\varPhi_{\sigma}\) as illustrated in Fig. 3.

Figure 3

Illustration of the bunch variables in the Watanabe–Strogatz formalism. Just like the Kuramoto order parameter \(Z_{\sigma}\), the bunch amplitude and bunch phase in \(z_{\sigma}=\rho_{\sigma}e^{i \varPhi _{\sigma}}\) characterize the level of synchrony. The quantities \(Z_{\sigma}\) and \(z_{\sigma}\) do however only coincide if the population is fully synchronized or for uniformly distributed constants of motion in the limit \({N\rightarrow\infty}\) (see text). The phase distribution variable \(\varPsi_{\sigma}\) is related to the shift and distribution of individual oscillators with respect to \(\varPhi_{\sigma}\)

For a population of sinusoidally coupled phase oscillators (1) with driving field \(H_{\sigma}=H_{\sigma}(t)\) the macroscopic variables evolve according to the Watanabe–Strogatz equations

$$\begin{aligned}& \dot{\rho}_{\sigma}= \frac{1-\rho_{\sigma}^{2}}{2}\operatorname {Re}\bigl(H_{\sigma}e^{-i\varPhi_{\sigma}} \bigr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPhi}_{\sigma}= \omega_{\sigma}+ \frac {1+\rho_{\sigma}^{2}}{2\rho_{\sigma}} \operatorname {Im}\bigl(H_{\sigma}e^{-i\varPhi _{\sigma}} \bigr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPsi}_{\sigma}= \frac{1-\rho_{\sigma}^{2}}{2\rho_{\sigma}} \operatorname {Im}\bigl(H_{\sigma}e^{-i\varPhi_{\sigma}} \bigr). \end{aligned}$$

Mathematically speaking, the reduction to three variables means that the phase space \(\mathbb {T}^{N}\) of (1) is foliated by 3-dimensional leaves, each of which is determined by constants of motion, \(\psi^{(\sigma)}_{k}\), \(k=1, \dotsc, N\) (of which \(N-3\) are independent). In other words, the choice of constants of motion determines a specific 3-dimensional invariant subset on which the macroscopic variables evolve. The Watanabe–Strogatz equations arise from the properties of Riccati equations, and the bunch variables are parameters of a family of Möbius transformations which determine the system’s dynamics; see [125–128] for more details on the mathematics behind these equations.

From a practical point of view, two things are needed to use the Watanabe–Strogatz equations (WS) to understand oscillator networks of the form (1). First, since the driving field H is often a function of the population order parameters \(Z_{\tau}\), \(\tau =1,\ldots, M\), we need to translate \(Z_{\sigma}\) into the bunch variables to get a closed set of equations. Write \(z_{\sigma}:=\rho_{\sigma}e^{i\varPhi_{\sigma}}\). As shown for example in [129], we have

$$\begin{aligned} Z_{\sigma}&= z_{\sigma}\gamma_{\sigma}\quad\text{where } \gamma _{\sigma}( \rho_{\sigma},\varPsi_{\sigma}) = \frac{1}{N\rho _{\sigma}}\sum _{j=1}^{N}\frac{\rho_{\sigma}e^{i\varPsi_{\sigma}} + e^{i\psi^{(\sigma)}_{j}}}{e^{i\varPsi_{\sigma}} + \rho_{\sigma}e^{i\psi^{(\sigma)}_{j}}}. \end{aligned}$$

Second, one needs to determine the constants of motion from the initial phases \(\theta_{\sigma,k}(0)\): A possible choice is to set \(\psi ^{(\sigma)}_{k}:=\theta_{\sigma,k}(0)\) and \(\rho_{\sigma}(0)=\varPhi _{\sigma}(0)=\varPsi_{\sigma}(0)=0\); see [123] for a detailed discussion and a different way to choose initial conditions that avoids the singularity at \(\rho_{\sigma}=0\). Taken together, the dynamics of individual oscillators (1) are now determined by (WS) via (25) and vice versa.

The relationship (25) between the bunch variables and the order parameter also indicates how the Watanabe–Strogatz equations and the Ott–Antonsen equations are linked. Pikovsky and Rosenblum [130] showed that for constants of motion that are uniformly distributed on the circle, \(\psi^{(\sigma)}_{k} = 2\pi k/N\), we have \(\gamma_{\sigma}\to1\) as \(N\to\infty\). Consequently, \(Z_{\sigma}=z_{\sigma}\) for such a choice of constants of motion in the limit of infinitely many oscillators. For the Kuramoto model with \(H_{\sigma}= Z_{\sigma}\), Eqs. (WSa) and (WSb) depend on Ψ only through γ. Thus, for constant \(\gamma=1\) Eqs. (WSa) and (WSb) decouple from (WSc). These two equations are equivalent to the Ott–Antonsen equations (OA) in the mean-field limit for identical oscillators. To summarize, the dynamics of the mean-field limit for identical oscillators is given by the Watanabe–Strogatz equations together with a distribution of constants of motion. For the particular choice of a uniform distribution of constants of motion, the equations decouple and the effective dynamics are given by the Ott–Antonsen equations.
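The convergence \(\gamma_{\sigma}\to1\) for uniformly spaced constants of motion can be checked directly from (25); the sketch below (with illustrative values of ρ and Ψ) shows the deviation \(\vert\gamma-1\vert\) vanishing geometrically fast in N:

```python
import numpy as np

def gamma(rho, Psi, psi):
    """gamma(rho, Psi) from (25) for constants of motion psi_1, ..., psi_N."""
    w = np.exp(1j * psi)
    return np.mean((rho * np.exp(1j * Psi) + w)
                   / (np.exp(1j * Psi) + rho * w)) / rho

rho, Psi = 0.5, 0.7     # illustrative bunch amplitude and distribution variable
for N in (10, 100, 1000):
    psi = 2 * np.pi * np.arange(N) / N         # uniformly spaced constants
    print(N, abs(gamma(rho, Psi, psi) - 1))    # decreases geometrically in N
```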

Watanabe–Strogatz equations for commonly used oscillator models

We now summarize the Watanabe–Strogatz equations (WS) for the commonly used oscillator models described in Sect. 2.

Kuramoto–Sakaguchi equations

For the Kuramoto–Sakaguchi model (4), the driving field H is a linear combination of the order parameters, \(H_{\sigma}= \sum_{\tau =1}^{M}c_{\sigma\tau} Z_{\tau}\). Assuming that the oscillators within each population are identical, \(\omega_{\sigma,k}=\omega _{\sigma}\), the dynamics are governed by the Watanabe–Strogatz equations for coupled Kuramoto–Sakaguchi populations,

$$\begin{aligned}& \dot{\rho}_{\sigma}= \frac{1-\rho_{\sigma}^{2}}{2}\operatorname {Re}\Biggl(\sum_{\tau=1}^{M}c_{\sigma\tau} \gamma_{\tau}\rho _{\tau} e^{i(\varPhi_{\tau}-\varPhi_{\sigma})} \Biggr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPhi}_{\sigma}= \omega_{\sigma}+ \frac{1+\rho _{\sigma}^{2}}{2\rho_{\sigma}} \operatorname {Im}\Biggl(\sum_{\tau=1}^{M}c_{\sigma\tau}\gamma_{\tau}\rho_{\tau} e^{i(\varPhi_{\tau}-\varPhi_{\sigma})} \Biggr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPsi}_{\sigma}= \frac{1-\rho_{\sigma}^{2}}{2\rho_{\sigma}} \operatorname {Im}\Biggl(\sum_{\tau=1}^{M}c_{\sigma\tau } \gamma_{\tau}\rho_{\tau} e^{i(\varPhi_{\tau}-\varPhi_{\sigma})} \Biggr). \end{aligned}$$

Networks of Theta neurons

For a finite population of identical Theta neurons (8) with identical excitability η and input current \(I(t)\) the Watanabe–Strogatz equations for identical Theta neurons [131] evaluate to

$$\begin{aligned}& \dot{\rho}= \frac{1-\rho^{2}}{2}\operatorname {Re}\bigl(i(\eta +\kappa I-1)e^{-i\varPhi} \bigr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPhi}= 1+\eta+\kappa I+\frac{1+\rho ^{2}}{2\rho} \operatorname {Im}\bigl(i( \eta+\kappa I-1)e^{-i\varPhi} \bigr), \end{aligned}$$
$$\begin{aligned}& \dot{\varPsi}= \frac{1-\rho^{2}}{2\rho} \operatorname {Im}\bigl(i(\eta+\kappa I-1)e^{-i\varPhi} \bigr). \end{aligned}$$

Note that, as for the Ott–Antonsen reduction above, one still needs to close this system by writing I in terms of the bunch variables in (WS) and the constants of motion. This is not straightforward and requires a considerable amount of computation [131].

Reductions for equivalent networks

For a finite network of identical QIF neurons governed by (9) with \(\eta_{j}=\eta\) for all j, the transformation \(V=\tan {(\theta/2)}\) converts this network into a network of identical Theta neurons (7). Consequently, such a network will also be described by equations of the form (27a)–(27c). As mentioned above, in the limit \(N\rightarrow\infty\) with equally spaced constants of motion, Eq. (27c) decouples from (27a) and (27b). In this case, writing \(z=\rho e^{i\varPhi}\) we find that z satisfies (19) or equivalently (23) (with \(\hat {\eta }=\eta\) and \(\Delta=0\)).

Limitations and challenges

Before we apply the mean-field reductions to particular oscillator networks in the next section, some (mathematical) comments on the limitations of these approaches are in order.

The main assumption behind the reduction methods is that network interactions are mediated by a coupling function with a single harmonic (of arbitrary order). There are explicit examples [132–134] that show that the reductions, as described above, become invalid when this assumption is violated. For example, chaotic dynamics may occur in a system where the reduction would yield an effective two-dimensional phase space; we discuss this example below. This does not mean that the reductions break down completely, and there may still be some degeneracy in the system if the interaction is of a specific form; see [135] for a more detailed discussion. It remains a challenge to identify what part of the mean-field reduction (if any) remains valid for more general interaction functions and phase response curves.

The Ott–Antonsen reduction for the mean-field limit allows for the oscillators to be nonidentical. By contrast, the Watanabe–Strogatz reduction of finite networks requires oscillators to be identical. Neither of these approaches applies to finite networks of nonidentical oscillators, and understanding such networks remains a challenge. Direct numerical simulations to elucidate the dynamics of networks of N almost identical oscillators are challenging as one needs to integrate an almost integrable dynamical system. There has also been some recent progress analyzing situations in which the Ott–Antonsen or Watanabe–Strogatz equations do not apply. First, a perturbation theory for the exact mean-field equations has been developed to elucidate the dynamics for systems that are close to sinusoidally coupled, for example if there are very weak higher harmonics in the interaction function [136]. Second, while not an exact representation of the dynamics, the collective coordinates approach by Gottwald and coworkers [76, 137, 138] has been instructive to gain insights into the dynamics of finite networks of nonidentical oscillators.

Finally, Ott and Antonsen showed that the manifold of oscillator densities \(f_{\sigma}\) on which the reduction holds is attracting [116]. Their method of proof has been shown to apply to a wider class of systems [103]. As pointed out by Mirollo [139] and later elaborated further [128], their proof is based on a strong smoothness assumption on the density \(f_{\sigma}\) which implies limitations to this approach. More precisely, to be able to evaluate contour integrals using the residue theorem, it is typically assumed that the integrand in (15), containing the intrinsic frequency distribution \(h_{\sigma}\) and the density \(f_{\sigma}\), is holomorphic. In particular, this assumption is only valid for distributions \(h_{\sigma}\) that allow for arbitrarily large (or small) intrinsic frequencies with nonzero probability: The identity theorem for holomorphic functions implies that \(h_{\sigma}(\omega)>0\) for all \(\omega\in \mathbb {R}\). Any distribution for which the intrinsic frequencies are confined to a finite interval—the intrinsic frequencies of any finite collection of oscillators will lie in a finite interval—is excluded. Hence, while the manifold described by Ott and Antonsen attracts some class of oscillator densities, it is not clear how large this class actually is (it does not include δ-distributions where all oscillators have the same phase). Put differently, it is important to explicitly characterize the space of densities in which the Ott–Antonsen manifold is attracting.

Dynamics of coupled oscillator networks

We now discuss global synchrony and synchrony patterns in phase oscillator networks, and highlight how the reductions presented in the previous section simplify their analysis. While we indicate along the way how most of these systems are relevant from the point of view of biology and neuroscience, we here take a predominantly dynamical systems perspective and highlight the applicability of, for example, bifurcation theory [61, 140]. We focus on a small number of coupled populations of oscillators, which can be seen as building blocks for larger models consisting of many coupled populations (e.g., regions of interest in a whole-brain model as discussed in Sect. 5 below).

Networks of Kuramoto-type oscillators

We first consider networks of Kuramoto–Sakaguchi and related Kuramoto-type oscillators. Despite their simplicity, they have found widespread application, for example in neuroscience, as outlined in Sect. 2.2, to understand synchronization phenomena. The network interactions of such oscillators depend on phase differences. Bifurcations may occur as one introduces an explicit phase dependency to the coupling [141] such as in the networks of Theta neurons which we discuss in the following section.

One oscillator population

Example 1

We first revisit Kuramoto’s original problem (see Problem 1 in Sect. 2.1 above) from the perspective of mean-field reductions: Given a globally coupled network of Kuramoto oscillators (2) with a Lorentzian distribution of intrinsic frequencies, what is the critical coupling strength \(K_{c}\) where oscillators start to synchronize?

This problem is surprisingly easy to solve in the mean-field limit \(N\to\infty\) using the Ott–Antonsen reduction. Assume that the distribution of intrinsic frequencies is a Lorentzian with mean ω̂ and width Δ. Recall that the order parameter Z evolves according to the Ott–Antonsen equation (17): Separating (17) for \(Z= Re^{i\phi }\) into real and imaginary parts yields

$$\begin{aligned}& \dot{R} = \biggl(-\Delta+\frac{K}{2}-\frac{K}{2}R^{2} \biggr)R, \end{aligned}$$
$$\begin{aligned}& \dot{\phi}= \hat {\omega }. \end{aligned}$$

Moreover, the manifold on which (17) describes the mean-field limit of (2) attracts initial phase distributions. Since the equation for the mean phase ϕ is completely uncoupled, it suffices to analyze (28a). Thus, Kuramoto’s problem in the infinite-dimensional mean-field limit reduces to solving the one-dimensional real ordinary differential equation (28a): By elementary analysis, we find that the equilibrium \(R=0\) is stable for \(K< K_{c}=2\Delta\) and loses stability in a pitchfork bifurcation where the solution \(R =\sqrt{1-2\Delta/K}>0\) becomes stable. The same analysis applies to the Kuramoto–Sakaguchi network (4) with \(M=1\) for phase lag \(\alpha\in (-\frac{\pi}{2}, \frac{\pi}{2} )\) with K replaced by \(K\cos(\alpha)\) in (17); note that for nonzero phase lag, \(\sin\alpha\neq0\), we have \(\dot{\phi}=\hat {\omega }+\frac{K}{2}\sin(\alpha)(1+R^{2})\) so that the collective frequency now depends nontrivially on R.
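The mean-field prediction can be compared against a direct finite-N simulation of (2). The sketch below (our illustrative setup: N = 5000 oscillators, seeded Cauchy-distributed frequencies, forward Euler stepping) reproduces \(R\approx\sqrt{1-2\Delta/K}\) up to finite-size fluctuations:

```python
import numpy as np

def kuramoto_R(K, Delta, N=5000, dt=0.02, t_max=100.0, seed=0):
    """Euler simulation of the Kuramoto model (2) with Lorentzian frequencies;
    returns the order parameter R averaged over the second half of the run."""
    rng = np.random.default_rng(seed)
    omega = Delta * rng.standard_cauchy(N)     # Lorentzian, mean 0, width Delta
    theta = rng.uniform(-np.pi, np.pi, N)
    Rs = []
    steps = int(t_max / dt)
    for step in range(steps):
        Z = np.mean(np.exp(1j * theta))
        # all-to-all coupling written via the order parameter:
        theta += dt * (omega + K * np.imag(Z * np.exp(-1j * theta)))
        if step > steps // 2:
            Rs.append(abs(Z))
    return np.mean(Rs)

K, Delta = 2.0, 0.5
print(kuramoto_R(K, Delta), np.sqrt(1 - 2 * Delta / K))  # close to each other
```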

Global synchronization of finite networks of identical Kuramoto–Sakaguchi oscillators is readily analyzed using the Watanabe–Strogatz reduction. As above, a phase variable decouples and we obtain a two-dimensional system which describes the dynamics of (4) for \(M=1\). Its analysis [122] shows that the system will synchronize perfectly, \(R\to1\) as \(t\to\infty\), for \(\alpha\in (-\frac {\pi}{2}, \frac{\pi}{2} )\) (attractive coupling) and converge to an incoherent equilibrium, \(R\to0\) as \(t\to\infty\), for \(\alpha\in (\frac{\pi}{2}, \frac{3\pi}{2} )\) (repulsive coupling). In the marginal case of \(\cos(\alpha) = 0\) the system is Hamiltonian [123].

Multimodal distributions in the Kuramoto model

While Kuramoto’s original model considered a single oscillator population with unimodally distributed frequencies—such as the Lorentzian distribution—Kuramoto also speculated on what dynamic behaviors a network consisting of a single population would exhibit if the distribution of natural frequencies was instead bimodal [80]: Depending on the coupling strength, the width and spacing of the peaks of the frequency distribution, oscillators may either aggregate and form a single crowd of oscillators, thus forming one “giant oscillator,” or disintegrate into two mutually unlocked crowds, corresponding to two giant oscillators.

Crawford [142] analyzed this case rigorously using center manifold theory, explaining the weakly nonlinear behavior and local bifurcations in the neighborhood of the incoherent state. Using the Ott–Antonsen reduction, Martens et al. [118] obtained exact results on all possible bifurcations and the bistability between incoherent, partially synchronized, and traveling wave solutions. Rather than superimposing two unimodal frequency distributions, Pazó and Montbrió [119] considered a modified model where the distribution of intrinsic frequencies is the difference of two Lorentzians; this allows the central dip of the distribution to become zero (Footnote 15).

Interestingly, describing a single population with an m-modal frequency distribution using the Ott–Antonsen reduction yields a set of m coupled ordinary differential equations. This set describes the dynamics of the m order parameters (11), one associated with each peak of the distribution, resulting in collective behavior where oscillators aggregate into a single group or into as many as m groups. The question arises as to whether the resulting set of equations can be related to M-population models as described by (4). This question was picked up by Pietras and Daffertshofer [143] who showed that the dynamical equations describing \(M=1\) population with a bimodal distribution can be mapped to those of \(M=2\) populations (4) with nonidentical coupling strengths \(K_{\sigma\tau}\) and equivalent bifurcations. However, this equivalence breaks down for \(M=3\) populations and trimodal distributions.

Higher-order and nonadditive interactions

Note that networks of Kuramoto–Sakaguchi oscillators (4) make two important assumptions about the network interactions. First, the interactions are sinusoidal, as discussed above, since the coupling function has a single harmonic. Second, the network interactions are additive [72, 144], that is, the joint influence of two distinct oscillators on a third is the sum of their individual influences. By contrast, coupling between oscillatory units generically contains nonlinear (nonadditive) interactions; concrete examples include oscillator networks [145], interactions in ecological networks [146], and nonlinear dendritic interactions between neurons [147–149]. For weakly coupled oscillator networks, higher-order interaction terms include higher harmonics in the coupling function as well as coupling terms which depend nonlinearly on three or more oscillator phases [150]. Such terms naturally arise in phase reductions: For a globally coupled network of symmetric oscillators close to a Hopf bifurcation with generic interactions, Ashwin and Rodrigues [151] calculated the corresponding higher-order interaction terms explicitly. Moreover, higher-order interactions in the effective phase dynamics can also arise for additively coupled nonlinear oscillators [152], for example in higher-order phase reductions [153]. Nonadditive interactions can be exploited for applications, such as to build neurocomputers [154].

The mean-field reductions discussed here can be used to analyze networks with particular types of higher-order interactions. For example, Skardal and Arenas [155] consider a single globally coupled population of indistinguishable oscillators where pure triplet interactions of the form \(\sin(\theta_{l}+\theta_{j}-2\theta_{k})\) determine the joint influence of oscillators j, l on oscillator k. In the mean-field limit, they find multistability and hysteresis between incoherent and partially synchronized attractors. In general, however, higher-order interaction terms lead to phase oscillator networks where the mean-field reductions cease to apply [134].
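One reason such triplet terms remain tractable is that the summed interaction can be rewritten through the order parameter: averaging \(\sin(\theta_{l}+\theta_{j}-2\theta_{k})\) over all pairs \((j,l)\) gives \(\operatorname {Im}(Z^{2}e^{-2i\theta_{k}})\), so the drive on each oscillator depends on the phases only through Z. A short numerical check of this identity (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
theta = rng.uniform(0.0, 2.0 * np.pi, N)

# Direct evaluation of the triplet drive on oscillator k: an O(N^2) double sum.
def triplet_drive_direct(theta, k):
    return np.mean(np.sin(theta[None, :] + theta[:, None] - 2.0 * theta[k]))

# Mean-field form of the same drive: (1/N^2) sum_{j,l} sin(theta_j + theta_l
# - 2*theta_k) = Im(Z^2 exp(-2i*theta_k)), with Z the Kuramoto order parameter.
Z = np.mean(np.exp(1j * theta))
drive_mf = np.imag(Z**2 * np.exp(-2j * theta))

drive_direct = np.array([triplet_drive_direct(theta, k) for k in range(N)])
print(np.max(np.abs(drive_direct - drive_mf)))
```

The two evaluations agree to machine precision for any phase configuration.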


Much progress has been made in understanding synchronization and more complicated collective dynamics in globally coupled networks of Kuramoto oscillators and their generalizations; see [141, 156, 157] for surveys. While we discussed Kuramoto’s problem as an example, the same methods apply to more general types of driving fields H: They may include homogeneous [115] or heterogeneous delays [158, 159] (the latter being of specific interest for coupled populations of neurons), they may be heterogeneous in terms of the contribution of individual oscillators [160], or they may include generalized mean fields [127]. However, note that much richer dynamics are possible when the assumption of sinusoidal coupling breaks down. Because of the Poincaré–Bendixson theorem [61, 161], chaos is not possible for the mean-field reductions of \(M=1\) populations of Kuramoto–Sakaguchi oscillators, since the effective dynamics of the Ott–Antonsen and Watanabe–Strogatz reductions are one- and two-dimensional, respectively. By contrast, even for fully symmetric networks, higher harmonics in the phase response curve/coupling function may lead to chaotic dynamics [132, 134].

Two oscillator populations

Two coupled populations of Kuramoto–Sakaguchi oscillators can give rise to a larger variety of synchrony patterns. Before considering general coupling between populations, we first discuss the widely investigated case of identical (and almost identical) populations of Kuramoto–Sakaguchi oscillators (4) with Lorentzian distribution of intrinsic frequencies. To be precise, we say that all populations of (4) are identical if for any two populations σ, τ, there is a permutation which sends σ to τ and leaves the corresponding equations (18) for the mean-field limit invariant. Intuitively speaking, this means we can swap any population with any other population without changing the dynamics. Mathematically speaking, the populations are identical if the Ott–Antonsen equations (18) have a permutational symmetry group that acts transitively [162]. Note that for the populations to be identical, the oscillators do not need to be identical. But if the populations are identical, then the frequency distributions \(h_{\sigma}\) are the same for all populations. Moreover, if the oscillators within each population have the same intrinsic frequency (as required for the Watanabe–Strogatz reduction) then all oscillators in the network have the same intrinsic frequency.

Oscillator networks which are organized into distinct populations support synchrony patterns which may be localized, that is, some populations show more (or less) synchrony than others. While this may not be surprising if the populations are nonidentical, such dynamics may also occur when the populations are identical. For identical populations of Kuramoto–Sakaguchi oscillators, the localized dynamics arise purely through the network interactions—the populations would behave identically if uncoupled—and hence constitute a form of dynamical symmetry breaking. The phenomenon of “coexisting coherence and incoherence” has been dubbed a chimera state in the literature [163] and has attracted a tremendous amount of attention in the last two decades; see [6365] for recent reviews. To date, an entire zoo of chimeras and chimera-like creatures has emerged in a range of different networked dynamical systems—with attempts to classify and distinguish these creatures [164, 165]—beyond the original context of phase oscillators [166]. Here we will discuss chimeras only in coupled populations of Kuramoto–Sakaguchi oscillators (4) as examples of localized patterns of (phase and frequency) synchrony.

Synchrony patterns for two identical populations

The Ott–Antonsen reduction has been instrumental to understand the dynamics of networks consisting of \(M=2\) populations of Kuramoto–Sakaguchi oscillators. Assuming that all intrinsic frequencies are distributed according to a Lorentzian, we obtain two coupled Ott–Antonsen equations (18) for the limit of infinitely large populations. In this section we focus on networks of identical populations, that is, the distributions of intrinsic frequencies are the same and coupling is symmetric; cf. Fig. 4(a). This allows one to simplify the parametrization of the system by introducing self-coupling \(c_{s} = k_{s} e^{-i\alpha_{s}} := c_{11} = c_{22}\) and neighbor-coupling \(c_{n} = k_{n} e^{-i\alpha_{n}} := c_{12} = c_{21}\) parameters and the coupling strength disparity \(A=(k_{s}-k_{n})/(k_{s}+k_{n})\). Writing \(Z_{\sigma}=R_{\sigma}e^{i\phi_{\sigma}}\) as above, the state of (18) is fully determined by the amount of synchrony in each population \(R_{1}\), \(R_{2}\) and the difference of the mean phase \(\psi :=\phi_{1}-\phi_{2}\) of the two populations; cf. [167]. Naturally, such networks support three homogeneous synchronized states: a fully synchronized state \(\textrm {S}\textrm {S}_{0}= \lbrace(R_{1},R_{2},\psi)=(1,1,0) \rbrace\) where both populations are synchronized and in phase, a cluster state \(\textrm {S}\textrm {S}_{\pi}= \lbrace(R_{1},R_{2},\psi)=(1,1,\pi) \rbrace\) where both populations are synchronized and in anti-phase, and a completely incoherent state \(\textrm {I}= \lbrace(R_{1},R_{2},\psi)=(0,0,*) \rbrace\). A bifurcation analysis shows that only one of the three is stable for any given choice of coupling parameters [167].

Figure 4

Synchrony patterns arise in networks of \(M=2\) populations. Panel (a) shows a cartoon of the network structure; the nodes of population \(\sigma=1\) are colored in red and the nodes of population \(\sigma=2\) in blue. The coupling within each population (black edges) is determined by the coupling strength \(k_{s}\) and phase lag \(\alpha_{s}\), and between populations (gray edges) by \(k_{n}\) and \(\alpha_{n}\). Panel (b) shows various stable synchrony patterns in the network as phase snapshots of the solutions to Eqs. (4) with \(N=1000\) oscillators per population once the system has relaxed to an attractor; here S indicates a population that is (fully) phase synchronized (\(R_{\sigma}= 1\)) and D a nonsynchronized population (\(R_{\sigma}< 1\)). The parameters are \(\alpha_{s} = 1.58\) in the first three plots—here different initial conditions converge to different attractors—and \(\alpha_{s}=1.64\) in the rightmost. The parameters \(A=0.7\), \(\alpha _{n}=0.44\) were the same in all plots

In addition to homogeneous synchronized states, networks of two identical populations also support synchronization patterns where synchrony is localized in one of the two populations, a chimera, as illustrated in Fig. 4(b). As discussed by Abrams et al. [168], for homogeneous phase lags (\(\alpha_{s}=\alpha_{n}\)) stable complete synchrony SS0 coexists with a stable chimera in \(\textrm {D}\textrm {S}= \lbrace R_{1}<1, R_{2}=1 \rbrace\), which is either stationary or oscillatory (Footnote 16). Note that the Ott–Antonsen reduction simplifies the analysis tremendously: It translates the problem for large oscillator networks into a low-dimensional bifurcation problem. Martens et al. [169] outlined the basins of attraction of the coexisting stable synchrony patterns, thereby answering the question as to which (macroscopic or microscopic) initial conditions converge to which state. Through directed perturbations it is possible to switch between different synchrony patterns, and thus between functional configurations of the network that are of relevance in neuroscience [32, 170], for example to embody memory states or to control the predominant direction of information flow between subpopulations of oscillators [33]. Further work addresses the robustness of chimeras against various inhomogeneities, including heterogeneous frequencies [100, 171], network heterogeneity [172], and additive noise [171].

If one allows for heterogeneous phase-lag parameters, \(\alpha _{s}\neq\alpha_{n}\), a variety of other attractors with localized synchrony emerge [167, 173]. This includes in particular solutions in \(\textrm {D}\textrm {D}= \lbrace0< R_{1}< R_{2}, R_{2}<1 \rbrace\) where neither population is fully phase synchronized; cf. Fig. 4(b). Among these are not only stationary or oscillatory solutions of the state variables, but also attractors where the order parameters \(Z_{1}\), \(Z_{2}\) fluctuate chaotically both in amplitude and with respect to their phase difference [174]. Finite networks with two populations of identical oscillators may be analyzed using the Watanabe–Strogatz equations (26a)–(26c). One finds that the bifurcation scenarios for the appearance of chimera states are similar to the dynamics observed for infinite populations [175]. Moreover, macroscopic chaos also appears in many finite networks [174], down to just two oscillators per population.

A note on finite networks of identical oscillators and localized frequency synchrony

For finite oscillator networks, the widely used intuitive definition of a chimera as a solution for networks of (almost) identical oscillators where “coherence and incoherence coexist” is difficult to apply in a mathematically rigorous way. Hence, Ashwin and Burylko [176] introduced the concept of a weak chimera which provides a mathematically testable definition of a chimera state in finite networks of identical oscillators; here, we only give an intuition and refer to [176, 177] for a precise definition. The main feature of a weak chimera is that identical oscillatory units (with the same intrinsic frequency if uncoupled) generate rhythms with two or more distinct frequencies solely through the network interactions—this is a fairly general form of synchronization. In the context of dynamical systems with symmetry [162], weak chimeras are, as outlined in [178], an example of dynamical symmetry breaking where identical elements have nonidentical dynamics since their frequencies are distinct.

More specifically, a weak chimera is characterized by localized frequency synchrony in a network of identical oscillators. Similar to the definition of identical populations further above, we say that the oscillators are identical if for any pair of oscillators \((\sigma,k)\) and \((\tau, j)\) there exists an invertible transformation of the oscillator indices which maps one to the other and keeps the equations of motion invariant. In other words, all oscillators are effectively equivalent. Now \(\dot{\theta}_{\sigma,k}(t)\) is the instantaneous frequency of oscillator \((\sigma,k)\)—the change of phase at time t—and thus the asymptotic average frequency of oscillator \((\sigma,k)\) is

$$ \varOmega_{\sigma,k} = \lim_{T\to\infty} \frac{1}{T} \int _{0}^{T}\dot{\theta}_{\sigma,k}(t)\,\textrm {d}t. $$

Rather than looking at phase synchrony (\(\theta_{\sigma,k}=\theta _{\tau, j}\)) of oscillators \((\sigma,k)\) and \((\tau, j)\), we say that the oscillators are frequency synchronized if \(\varOmega _{\sigma,k}=\varOmega_{\tau, j}\). Weak chimeras now show localized frequency synchrony, that is, all oscillators within one population have the same frequency \(\varOmega_{\sigma}=\varOmega _{\sigma,k}\) while there are at least two distinct populations \(\tau \neq\tau'\) that have different frequencies, \(\varOmega_{\tau}\neq \varOmega_{\tau'}\). Note that weak chimeras are impossible for a globally coupled network of identical phase oscillators (that is, there is only a single population \(M=1\)): Such a network structure forces frequency synchrony of all oscillators [176].
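The definition of \(\varOmega_{\sigma,k}\) is straightforward to evaluate numerically from the unwrapped phases. A minimal sketch for a single pair of coupled oscillators (all parameter values are arbitrary): frequency synchrony sets in once the coupling outweighs the frequency mismatch, even though the phases themselves never coincide.

```python
import numpy as np

def average_frequencies(omega, K, T=1000.0, dt=0.01):
    """Integrate two sinusoidally coupled phase oscillators and estimate the
    asymptotic average frequencies Omega_k = (theta_k(T) - theta_k(0)) / T."""
    theta = np.array([0.0, 1.0])   # arbitrary initial phases, kept unwrapped
    theta0 = theta.copy()
    for _ in range(int(T / dt)):
        coupling = K * np.sin(theta[::-1] - theta)  # pairwise sine coupling
        theta = theta + dt * (omega + coupling)
    return (theta - theta0) / T

omega = np.array([1.0, 1.4])

Omega_uncoupled = average_frequencies(omega, K=0.0)  # Omega_k equals omega_k
Omega_locked = average_frequencies(omega, K=1.0)     # frequency synchronized
print(Omega_uncoupled, Omega_locked)
```

For \(K=1\) the pair locks to the common frequency \((\omega_{1}+\omega_{2})/2 = 1.2\), so \(\varOmega_{1}=\varOmega_{2}\) while the phase difference stays at a fixed offset.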

Weak chimeras have been shown to exist in a range of networks which consist of \(M=2\) interacting populations of phase oscillators. For weakly interacting populations of phase oscillators with general interaction functions there can be stable weak chimeras with quasiperiodic [176, 179] and chaotic dynamics [177]. However, neither weak interaction nor general coupling functions are necessary for dynamics with localized frequency synchrony to arise: Even sinusoidally coupled networks (4) of just \(N=2\) oscillators per population support stable regular [175] and chaotic [174] weak chimeras.

Dynamics of nonidentical populations with distinct frequency distributions

As mentioned above, chimera states appear for two identical populations of phase oscillators. Using the Ott–Antonsen equations, Laing showed that these dynamics persist for (4) with \(M=2\) if \(\Delta_{\sigma}>0\) and \(\omega_{\sigma}\neq\omega _{\sigma'}\) in the large N limit [100]; see also [180] for further bifurcation analysis. As heterogeneity is increased, stationary chimera states can become oscillatory through Hopf bifurcations and may eventually be entirely destroyed.

Montbrió et al. [181] studied two populations where not only frequencies were nonidentical (\(\Delta _{\sigma}>0\), \(\varOmega_{\sigma}\neq\varOmega_{\sigma'}\)), but also the coupling was asymmetric between the two populations. In another study, Laing et al. considered noncomplete networks to study the sensitivity of chimera states against gradual removal of random links starting from a complete network [172], and found that oscillations of chimera states can be either created or suppressed depending on the type of link removal.

Dynamics of nonidentical populations with asymmetric input or output

Another way to break symmetry in a population of Kuramoto oscillators is inspired by neural networks with excitatory and inhibitory coupling [56]: One replaces K with a random coefficient \(K_{j}\) inside the sum in (2). Thus, oscillators with \(K_{j} >0\) mimic the behavior of excitatory neurons while those with \(K_{j}<0\) correspond to inhibitory neurons. The interactions between oscillators j and l are not necessarily symmetric, unless \(K_{j} = K_{l}\). The study by Hong and Strogatz [182] reveals that—somewhat surprisingly—extending the Kuramoto model in this fashion yields dynamics that resemble those of the original model (2) when the intrinsic frequencies \(\omega_{k}\) are nonidentical. Similar coupling schemes accommodating excitatory and inhibitory coupling have been devised for multi-population models (5) to study how solitary states emerge within a synchronized population, thus leading to the formation of clusters [183].

Another possibility to include coupling heterogeneity, also considered by Hong and Strogatz, is to introduce an oscillator-dependent coupling parameter \(K_{k}\) outside of the sum in Eq. (2); see [184]. This relates to social behavior: An oscillator k is a conformist if \(K_{k}>0\) (it wants to synchronize with the mean field) and a contrarian if \(K_{k}<0\). This setup may give rise to complex states where oscillators bunch up in groups with a phase difference of π or move like a traveling wave. A later study found that the system with identical oscillators harbors even richer dynamics, including incoherent states [185].
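A minimal simulation sketch of this conformist/contrarian setup (population sizes, coupling values, and the mean-field rewriting of the sum are illustrative assumptions): with identical intrinsic frequencies, conformists and contrarians settle into two groups a phase difference of π apart, the π-state described above.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
# Oscillator-dependent coupling K_k outside the sum: 70 conformists (K_k = +1)
# and 30 contrarians (K_k = -1); identical intrinsic frequencies for simplicity.
Kk = np.where(np.arange(N) < 70, 1.0, -1.0)
theta = rng.uniform(0.0, 2.0 * np.pi, N)

dt, T = 0.05, 200.0
for _ in range(int(T / dt)):
    Z = np.mean(np.exp(1j * theta))  # Kuramoto order parameter Z = R e^{i Phi}
    # theta_k' = K_k * R * sin(Phi - theta_k), the mean-field form of the sum
    theta = theta + dt * Kk * np.imag(Z * np.exp(-1j * theta))

phi_conf = np.angle(np.mean(np.exp(1j * theta[:70])))    # conformist mean phase
phi_contra = np.angle(np.mean(np.exp(1j * theta[70:])))  # contrarian mean phase
R = np.abs(np.mean(np.exp(1j * theta)))
print(R, np.cos(phi_conf - phi_contra))
```

The global order parameter settles near \(|0.7-0.3| = 0.4\), and the two group phases end up (anti-)aligned, \(\cos(\phi_{\mathrm{conf}}-\phi_{\mathrm{contra}})\approx-1\).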

Three and more oscillator populations

Stable synchrony patterns for three identical populations

We first consider identical populations with reciprocal coupling in the sense that \(c_{\sigma\tau}=c_{\tau\sigma}\); see [186, 187]. Here the coupling is determined by the self-coupling strength \(k_{s}\) and phase lag \(\alpha _{s}\), as well as the coupling strengths and phase lags to the neighboring populations, \(k_{n_{1}}\), \(k_{n_{2}}\), \(k_{n_{3}}\) and \(\alpha_{n_{1}}\), \(\alpha_{n_{2}}\), \(\alpha_{n_{3}}\). After reducing the phase-shift symmetry, the state of the system is determined by the magnitudes of the order parameters, \(R_{\sigma}= \vert Z_{\sigma} \vert \), and the phase differences between the mean fields, \(\psi_{1}=\phi_{2}-\phi_{1}\) and \(\psi_{2}=\phi_{3}-\phi_{1}\).

Networks of three populations support a variety of localized synchrony patterns. For coupling with a triangular symmetry, that is, \(k_{n_{1}}=k_{n_{2}}=k_{n_{3}}\leq k_{s}\) and \(\alpha_{n_{1}}=\alpha _{n_{2}}=\alpha_{n_{3}}=\alpha_{s}\), Martens [186] identified coexisting synchrony patterns: There are three stable solution branches, full phase synchrony \(\textrm {S}\textrm {S}\textrm {S}= \lbrace R_{1}=R_{2}=R_{3}=1 \rbrace\) as well as two chimeras in \(\textrm {S}\textrm {D}\textrm {S}= \lbrace R_{1}=R_{3}=1>R_{2} \rbrace\) and in \(\textrm {D}\textrm {S}\textrm {D}= \lbrace R_{1}=R_{3}< R_{2}=1 \rbrace\). The Ott–Antonsen reduction allows one to perform an explicit bifurcation analysis of the resulting planar system and shows bifurcations similar to networks with \(M=2\) populations. Remarkably, there are parameter values where \(\textrm {S}\textrm {S}\textrm {S}\) as well as the chimeras in \(\textrm {S}\textrm {D}\textrm {S}\), \(\textrm {D}\textrm {S}\textrm {D}\) are stable simultaneously; this gives rise to the possibility of switching between these three synchronization patterns through directed perturbations [169]. This triangular symmetry is broken in [187] by allowing \(k_{n_{2}}\neq k_{n_{1}}\). Thus, the coupling between populations 2 and 3 can be gradually reduced or increased until the network effectively becomes a chain of three populations or effectively two populations, respectively. A bifurcation analysis shows that the chimeras in \(\textrm {S}\textrm {D}\textrm {S}\) and \(\textrm {D}\textrm {S}\textrm {D}\) persist and provides stability boundaries.

Metastability and dynamics of localized synchrony for identical oscillators

The synchrony patterns above were primarily considered as attractors: For a range of initial phase configurations, the long-term dynamics of the oscillator network will exhibit a particular synchrony pattern. While this may be a good approximation for large-scale neural dynamics on a short time-scale, the global dynamics of large-scale brain networks are usually much more complicated [26]. Neural recordings show that particular dynamical states (of synchrony and activity) persist for some time before a rapid transition to another state [53, 188, 189]. One approach to model such dynamics is to assume that there are a number of metastable states (rather than attractors) in the network phase space which are connected dynamically by heteroclinic trajectories (Footnote 17) [190]. If heteroclinic trajectories form a heteroclinic network (Footnote 18)—the nodes of this network are dynamical states, the links are connecting heteroclinic trajectories—the system can exhibit sequential switching dynamics: The state will stay close to one metastable state before a rapid transition, or switch, to the next dynamical state. Heteroclinic networks have long been subject to investigation, both theoretically [191] and with respect to applications in neuroscience [43]; one possible modeling approach is to write down kinetic (Lotka–Volterra type) equations for interacting macroscopic activity patterns [192, 193] which support heteroclinic networks.

Heteroclinic dynamics also arise in phase oscillator networks. For globally coupled oscillator networks, i.e., \(M=1\) population, there are heteroclinic networks between patterns of phase synchrony [194, 195]. As mentioned above, all oscillators in these networks are necessarily frequency synchronized, that is, they show the same rate of activity. More recently, it was shown that more general network interactions than those in (4) allow for heteroclinic switching between weak chimeras as states with localized frequency synchrony [196]: Each population will sequentially switch between states with high activity (frequency) to a state with low activity. One of the simplest phase oscillator networks which exhibits such dynamics consists of \(M=3\) populations of \(N=2\) oscillators where \(K>0\) mediates the coupling strength between populations. More precisely, the dynamics of oscillator \((\sigma, k)\) are given by

$$ \begin{aligned}[b] \dot{\theta}_{\sigma,k} &= \sin( \theta_{\sigma,3-k}-\theta_{\sigma ,k}+\alpha)+r\sin\bigl(2( \theta_{\sigma,3-k}-\theta_{\sigma,k}+\alpha)\bigr) \\ &\quad -K\cos(\theta_{\sigma-1,1}-\theta_{\sigma-1,2}+ \theta_{\sigma ,3-k}-\theta_{\sigma,k}+\alpha) \\ &\quad -K\cos(\theta_{\sigma-1,2}-\theta_{\sigma-1,1}+ \theta_{\sigma ,3-k}-\theta_{\sigma,k}+\alpha) \\ &\quad +K\cos(\theta_{\sigma+1,1}-\theta_{\sigma+1,2}+ \theta_{\sigma ,3-k}-\theta_{\sigma,k}+\alpha) \\ &\quad +K\cos(\theta_{\sigma+1,2}-\theta_{\sigma+1,1}+ \theta_{\sigma ,3-k}-\theta_{\sigma,k}+\alpha). \end{aligned} $$

Here the interactions within each population are not just given by a first harmonic as in (4) but also by a second harmonic (scaled by a parameter r); this is sometimes referred to as Hansel–Mato–Meunier coupling [194]. Moreover, the interactions between populations are not additive but consist of nonlinear functions of four phase variables; this is a concrete example of the higher-order interaction terms discussed above. It remains an open question whether such generalized interactions are necessary to generate heteroclinic dynamics between weak chimeras.

Dynamics of metastable states with localized (frequency) synchrony are of interest also in larger networks of \(M>3\) populations. Since explicit analytical results are hard to get for such networks, Shanahan [197] used numerical measures to analyze how metastable and “chimera-like” the network dynamics are. Recall that \(R_{\sigma}(t)\) encodes the level of synchrony of population σ at time t. Let \(\langle\cdot \rangle_{\sigma}\), \(\operatorname {Var}_{\sigma}\) denote the mean and variance over all populations \(\sigma=1, \dotsc, M\) and \(\langle\cdot \rangle_{T}\), \(\operatorname {Var}_{T}\) mean and variance over the time interval \([0, T]\). Now

$$\lambda= \bigl\langle \operatorname {Var}_{T}\bigl(R_{\sigma}(t)\bigr) \bigr\rangle _{\sigma}$$

gives how much the synchrony of individual populations varies over time, while

$$\chi= \bigl\langle \operatorname {Var}_{\sigma}\bigl(R_{\sigma}(t)\bigr) \bigr\rangle _{T} $$

encodes how much synchrony varies across populations. Intuitively, large values of λ correspond to a high level of “metastability” while large values of χ indicate that the dynamics are “chimera-like”. On the one hand, these measures have subsequently been applied to more general oscillator networks [198, 199]. On the other hand, they have been applied to study the effect of changes to the network structure (for example through lesions) to the dynamics of Kuramoto–Sakaguchi oscillators (4) with delay on human connectome data [200].
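Both indices are one-liners once the synchrony time series \(R_{\sigma}(t)\) are available; the toy data below are an arbitrary illustration (eight populations whose synchrony oscillates out of phase).

```python
import numpy as np

def metastability_indices(R):
    """Shanahan-style indices from an array R[t, sigma] of per-population
    synchrony time series: lambda averages the temporal variance over the
    populations, chi averages the across-population variance over time."""
    lam = np.mean(np.var(R, axis=0))  # Var over time, then mean over populations
    chi = np.mean(np.var(R, axis=1))  # Var over populations, then mean over time
    return lam, chi

t = np.linspace(0.0, 10.0, 500)
# Toy data: eight populations whose synchrony oscillates out of phase,
# so both indices are large.
R = 0.5 + 0.4 * np.sin(t[:, None] + np.arange(8)[None, :])
lam, chi = metastability_indices(R)
print(lam, chi)
```

As a sanity check, populations with constant but distinct synchrony levels give \(\lambda=0\) and \(\chi>0\), while identical time-varying populations give \(\chi=0\) and \(\lambda>0\).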

Populations with distinct intrinsic frequencies

Mean-field reductions have also been successful at describing networks of nonidentical populations with distinct mean intrinsic frequencies. Examples of such a setup include interacting neuron populations in the brain with distinct characteristic rhythms. Resonances between the mean intrinsic frequencies give rise to higher-order interactions. Subject to certain conditions, one can apply the Ott–Antonsen reduction for the mean-field limit [201] or the Watanabe–Strogatz reduction for finite networks [202] to understand the collective dynamics. If resonances between the mean intrinsic frequencies of the populations are absent [203], then the mean-field limit equations (OA)—a system with 2M real dimensions—simplify even further. More specifically, assume that the intrinsic frequencies are distributed according to a Lorentzian distribution with width \(\Delta_{\sigma}\) and write \(Z_{\sigma}= R_{\sigma}e^{i\phi_{\sigma}}\) for the Kuramoto order parameter as above. As outlined in [203], nonresonant interactions imply that—as in (28a)—the equations for \(R_{\sigma}\) in (OA) decouple from the dynamics of the mean phases \(\phi_{\sigma}\). That is, the macroscopic dynamics are described by the M-dimensional system of equations

$$ \dot{R}_{\sigma}= \Biggl(-\Delta_{\sigma}- \sum_{\tau=1}^{M}b_{\sigma\tau}R_{\tau}+ \bigl(1-R_{\sigma}^{2} \bigr) \Biggl(a_{\sigma}+ \sum_{\tau=1}^{M}c_{\sigma\tau}R_{\tau}\Biggr) \Biggr)R_{\sigma}, $$

where \(a_{\sigma}, b_{\sigma\tau}, c_{\sigma\tau}\in \mathbb {R}\) are parameters which depend on the underlying nonlinear oscillator system. Note that these equations of motion are similar to Lotka–Volterra-type dynamical systems which have been used to model sequential dynamics in neuroscience [192, 193]. Indeed, (31) gives rise to a range of dynamical behavior, including sequential localized synchronization and desynchronization through cycles of heteroclinic trajectories as well as chaotic dynamics [203].
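Equations (31) are cheap to integrate directly. The sketch below uses arbitrary illustrative parameter values for \(M=2\) populations (an assumption, not taken from [203]); for these values the system simply relaxes to an equilibrium with partial synchrony in both populations.

```python
import numpy as np

# Illustrative integration of the nonresonant mean-field equations (31) for
# M = 2 populations; the parameter values below are arbitrary assumptions.
Delta = np.array([0.1, 0.1])
a = np.array([1.0, 1.0])
b = np.array([[0.0, 0.05], [0.05, 0.0]])
c = np.array([[0.0, 0.1], [0.1, 0.0]])

def rhs(R):
    """dR_s/dt = (-Delta_s - sum_t b_st R_t + (1 - R_s^2)(a_s + sum_t c_st R_t)) R_s"""
    return (-Delta - b @ R + (1.0 - R**2) * (a + c @ R)) * R

R = np.array([0.2, 0.6])
dt = 0.01
for _ in range(int(500 / dt)):  # Euler integration until the flow settles
    R = R + dt * rhs(R)
print(R, rhs(R))
```

Note that, as stated in the text, only the M real amplitudes \(R_{\sigma}\) need to be integrated; the mean phases are decoupled.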

Networks of neuronal oscillators

Neurons can be modeled at different levels of realism and complexity [204]. The approach we (and many others) take is to ignore the spatial extent of individual neurons (including dendrites, soma, and axons) and treat each neuron as a single point whose state is described by a small number of variables such as intracellular voltage and the concentrations of certain ions. We also ignore stochastic effects and describe the dynamics of single neurons by a small number of ordinary differential equations. By definition, the state of a Theta neuron or a QIF neuron is described by a phase variable. However, under the assumption of weak coupling, higher-dimensional models with a stable limit cycle (e.g., Hodgkin–Huxley, FitzHugh–Nagumo) can be reduced to a phase description using phase reduction [43, 44].

The two main types of coupling between neurons are through synapses or gap junctions. In synaptic coupling, the firing of a presynaptic neuron causes a change in the membrane conductance of the postsynaptic neuron, mediated by the release of neurotransmitters. This has the effect of causing a current to flow into the postsynaptic neuron, the current being of the form

$$ I(t)=\mathrm {g}(t) \bigl(V^{\text{rev}}-V\bigr), $$

where \(V^{\text{rev}}\) is the reversal potential for that synapse, V is the voltage of the postsynaptic neuron, and \(\mathrm {g}(t)\) is the time-dependent conductance. The sign of \(V^{\text{rev}}\) relative to the resting potential of the postsynaptic neuron governs whether the synapse is excitatory or inhibitory. The function \(\mathrm {g}(t)\) may be stereotypical, i.e., it may have the same functional form for each firing of the presynaptic neuron, where t is measured from the last firing, or it may have its own dynamics. One approximation in this type of modeling is to ignore the value of V in (32) and just assume that the firing of a presynaptic neuron causes a pulse of current to be injected into the postsynaptic neuron(s).
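The pulse-of-current approximation can be illustrated with a single Theta neuron (7): a rectangular current pulse pushes an otherwise excitable neuron (\(\eta<0\)) across threshold. The pulse timing, amplitude, and parameter values below are arbitrary assumptions.

```python
import numpy as np

def spike_count(eta, I_pulse, T=20.0, dt=0.001):
    """Theta neuron, theta' = 1 - cos(theta) + (1 + cos(theta)) * (eta + I(t)),
    driven by a stereotyped rectangular current pulse I(t) = I_pulse for
    1 < t < 4 (the pulse-of-current simplification of synaptic input).
    The phase is kept unwrapped; a spike is a crossing of pi + 2*pi*k."""
    theta = -0.6  # near the stable rest state of the excitable regime (eta < 0)
    spikes = 0
    for step in range(int(T / dt)):
        t = step * dt
        I = I_pulse if 1.0 < t < 4.0 else 0.0
        new = theta + dt * (1.0 - np.cos(theta)
                            + (1.0 + np.cos(theta)) * (eta + I))
        if np.floor((new - np.pi) / (2 * np.pi)) > np.floor((theta - np.pi) / (2 * np.pi)):
            spikes += 1  # the phase passed through pi: the neuron fired
        theta = new
    return spikes

print(spike_count(eta=-0.1, I_pulse=1.0), spike_count(eta=-0.1, I_pulse=0.0))
```

With the pulse the neuron fires once and then returns to rest; without input it stays subthreshold.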

In gap junctional coupling a current flows that is proportional to voltage differences, so if neurons k and j have voltages \(V_{k}\) and \(V_{j}\), respectively, and g is the (constant) gap junction conductance, the current flowing from neuron k to neuron j is \(I=\mathrm {g}(V_{k}-V_{j})\).

Populations of Theta neurons

In this section, we consider a population of Theta neurons (7) where the network interactions are generated by the input from all other neurons in the network. For input through synapses, for example, each neuron receives signals from the rest of the network through the input current I. Here, we will focus on the Ott–Antonsen reduction for Theta neurons (19) in the mean-field limit, assuming that variations in excitability are distributed according to a Lorentzian. The key ingredient here is to write the network input in terms of the mean-field variables to obtain a closed system of mean-field equations; as we will see below, this is possible for a range of couplings that are relevant for neural dynamics. For now, we focus on one population and omit the population index σ.

In the following, we consider a network where each neuron emits a pulse-like signal of the form

$$\begin{aligned} P_{n}(\theta) &= a_{n}(1-\cos{ \theta})^{n} \end{aligned}$$

as it fires (the phase θ increases through π, see Figs. 5 and 2). The parameter \(n\in \mathbb {N}\) determines the sharpness of a pulse and \(a_{n} = 2^{n} (n!)^{2}/(2n)!\) is the normalization constant such that \(\int_{0}^{2\pi }P_{n}(\theta)\,\textrm {d}\theta=2\pi\); cf. Fig. 5. The average output of all neurons in the network, each one contributing identically, is

$$ P^{(n)}=\frac{1}{N}\sum_{j=1}^{N}P_{n}(\theta_{j}). $$

Now \(P^{(n)}\) can be expressed as a function of the order parameter Z: As shown in [93, 205, 206] we have for the mean-field limit of infinitely many neurons, \(N\to \infty\),

$$\begin{aligned} P^{(n)}&=a_{n} \Biggl(C_{0}+ \sum_{q=1}^{n}C_{q}\bigl( Z^{q}+\bar{Z}^{q}\bigr) \Biggr) \end{aligned}$$

with coefficients

$$\begin{aligned} C_{q}&=\sum_{k=0}^{n}\sum _{m=0}^{k}\frac{n!(-1)^{k}\delta _{k-2m,q}}{2^{k}(n-k)!m!(k-m)!}. \end{aligned}$$

Here \(\delta_{p,q}=1\) if \(p=q\) and \(\delta_{p,q}=0\) otherwise. In the limit of infinitely narrow pulses, \(n\rightarrow\infty\), we find

$$ P^{\infty}=\frac{1- \vert Z\vert ^{2}}{1+Z+\bar{Z}+ \vert Z\vert ^{2}}. $$
Figure 5

The function \(P_{n}(\theta)\) is a pulse centered at \(\theta =\pi\), the phase where a neuron spikes; here \(P_{n}\) is plotted for \(n=1, \dotsc, 9\). As n increases, the pulse becomes narrower
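These identities can be checked numerically. On the Ott–Antonsen manifold, the phase density is a wrapped Cauchy distribution, whose moments satisfy \(\langle e^{iq\theta }\rangle =Z^{q}\); sampling phases as the Möbius image of uniformly distributed phases therefore lets one compare the direct average of \(P_{n}\) with the order-parameter expansion. A minimal sketch (the value of \(Z\) and the sample size are illustrative):

```python
import numpy as np
from math import factorial

def a(n):
    # normalization constant of the pulse P_n
    return 2**n * factorial(n)**2 / factorial(2*n)

def C(q, n):
    # coefficients C_q of the expansion of P^(n) in powers of Z
    return sum(factorial(n)*(-1)**k
               / (2**k*factorial(n - k)*factorial(m)*factorial(k - m))
               for k in range(n + 1) for m in range(k + 1) if k - 2*m == q)

# phases on the Ott-Antonsen manifold: the Moebius image of uniform phases,
# so that the empirical moments satisfy <e^{i q theta}> ~ Z0^q
Z0 = 0.4 + 0.2j
u = np.random.default_rng(0).uniform(0, 2*np.pi, 200_000)
w = (np.exp(1j*u) + Z0)/(1 + np.conj(Z0)*np.exp(1j*u))
theta = np.angle(w)

n = 2
Z = np.mean(np.exp(1j*theta))                    # empirical order parameter
direct = np.mean(a(n)*(1 - np.cos(theta))**n)    # direct average of P_n
series = a(n)*(C(0, n) + sum(C(q, n)*(Z**q + np.conj(Z)**q)
                             for q in range(1, n + 1)))
print(abs(direct - series.real))  # differs only by sampling error
```

For \(n=2\) the coefficients evaluate to \(C_{0}=3/2\), \(C_{1}=-1\), \(C_{2}=1/4\), consistent with the explicit form of \(P^{(2)}\) used in Example 2 below.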

Synaptic coupling

If each Theta neuron (7) receives instantaneous synaptic input in the form of current pulses as in [93, 94, 205], the input current to each neuron is the network output

$$ I(t) = P^{(n)}(t). $$

A positive coupling strength \(\kappa>0\) for the Theta neuron (7) corresponds to excitatory coupling and \(\kappa<0\) to inhibitory coupling. Note that since I is now a function of the Kuramoto order parameter by (35), we have closed the Ott–Antonsen equation for the Theta neuron (19) and obtain a system describing the dynamics of infinitely many oscillators.

Example 2

The challenge in Problem 2 was to classify what dynamics are possible in a single population of globally coupled Theta neurons with pulsatile coupling and to specify the onset where neurons start to fire. The dynamical repertoire of a population of Theta neurons can be understood using the Ott–Antonsen equations for the limit \(N\rightarrow\infty\). We follow the work by Luke et al. [93], who considered a network with pulsatile coupling (33) of nontrivial pulse width (\(n=2\)) and direct synaptic coupling. According to (35), the pulse shape evaluates to

$$ P^{(2)}(Z) = 1 + \frac{1}{6}Z^{2} + \frac{1}{6}\bar{Z}^{2} - \frac {4}{3}\operatorname {Re}(Z) $$

as a function of Z. With direct synaptic coupling \(I=\kappa P^{(n)}\), the Ott–Antonsen equations (19) for an infinitely large population are thus given by

$$\begin{aligned} \dot{Z} &= -\frac{1}{2} \biggl( \biggl(\Delta-i\hat {\eta }- i \kappa \biggl(1 + \frac{1}{6}Z^{2} + \frac{1}{6} \bar{Z}^{2} - \frac{4}{3}\operatorname {Re}(Z) \biggr) \biggr) (1+Z)^{2} +i(1-Z)^{2} \biggr). \end{aligned}$$

This closed, two-dimensional set of equations can readily be analyzed using dynamical systems methods.
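For instance, the mean-field trajectory can be obtained with a few lines of code; the sketch below integrates the equation above with a standard RK4 step (the parameter values are illustrative, not those used in [93]):

```python
import numpy as np

def P2(Z):
    # pulse mean field P^(2) written in terms of the order parameter Z
    return (1 + Z**2/6 + np.conj(Z)**2/6 - (4/3)*Z.real).real

def Zdot(Z, Delta=0.5, eta=1.0, kappa=1.0):
    # right-hand side of the closed Ott-Antonsen equation above;
    # eta denotes the mean excitability, Delta the distribution width
    return -0.5*((Delta - 1j*eta - 1j*kappa*P2(Z))*(1 + Z)**2
                 + 1j*(1 - Z)**2)

def rk4(Z, dt):
    k1 = Zdot(Z)
    k2 = Zdot(Z + 0.5*dt*k1)
    k3 = Zdot(Z + 0.5*dt*k2)
    k4 = Zdot(Z + dt*k3)
    return Z + dt*(k1 + 2*k2 + 2*k3 + k4)/6

Z, dt = 0.1 + 0j, 0.01
for _ in range(int(100/dt)):
    Z = rk4(Z, dt)

# for Delta > 0 the unit disc is invariant, so |Z| remains below 1
print(abs(Z))
```

Scanning such runs over \((\kappa, \hat{\eta}, \Delta)\) is how the bifurcation structure described next can be explored numerically.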

Different dynamic behaviors may be observed for (19) while varying up to three parameters: the coupling strength κ, the excitability threshold η̂, and the width Δ of the excitability distribution. Luke et al. [93] found three distinct stable dynamical regimes: (i) partially synchronous rest, (ii) partially synchronous spiking, and (iii) collective periodic wave dynamics. In partially synchronous rest, most neurons remain quiescent (a stable node in the two-dimensional Ott–Antonsen equations (19) for Z); in the partially synchronous spiking regime most neurons spike continuously (a stable focus for Z); and in the collective periodic wave state neurons fire periodically (a stable periodic orbit of the order parameter Z). Varying κ from small to large values, we typically observe a transition from partially synchronous rest (quiescence) to partially synchronous spiking, with growing synchrony as κ increases. This transition is characterized by hysteresis arising around two fold bifurcations, which originate in a cusp bifurcation. For certain parameter values, the order parameter may undergo a Hopf bifurcation from partially synchronous spiking to collective periodic wave dynamics so that \(Z(t)\) becomes oscillatory.

In a neuroscientific context, knowledge of macroscopic quantities other than the order parameter Z is sometimes preferable, such as the firing rate given via (21) as \(r=\frac{1}{\pi} \operatorname {Re}((1-\bar{Z})/(1+\bar{Z}) )\). Alternatively, as outlined in Sect. 3.1.3, the macroscopic equation (23) for \(W(t)\) is equivalent to (19) via a conformal transformation and describes the evolution of the population’s firing rate \(r=\frac{1}{\pi} \operatorname {Re}(W )\) and average voltage \(V=\operatorname {Im}(W )\). The algebraic solution for stationary states is particularly simple if one chooses an infinitely narrow pulse shape (\(n\rightarrow\infty\)); note, however, that this choice may be biophysically less realistic [93] and yields more degenerate dynamics: for example, bifurcations giving rise to oscillations in the order parameter (firing rate) disappear in this particular network with \(M=1\) population.

Finally, we note that if in addition the excitability of neurons varies periodically, more complicated dynamics and macroscopic chaos can be observed [205]. While this example covers networks of Theta neurons, the same approach applies to networks of QIF neurons with direct synaptic coupling as given by (38); see, for example, the analyses in [207–209].

A simple modification of (7) is to add synaptic dynamics by letting the input current I satisfy the equation

$$ \tau _{\mathrm {syn}}\dot{I}=P^{(n)}-I, $$

where \(\tau _{\mathrm {syn}}\) is the time-constant governing the synaptic dynamics. In the limit \(\tau _{\mathrm {syn}}\to0\) the synaptic dynamics are instantaneous and we recover the previous model. Again, with (35) the Ott–Antonsen equations (19) and (39) form a closed system of equations that describe the dynamics in the mean-field limit.
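A sketch of the resulting closed \((Z, I)\) system, using the \(n=2\) pulse from Example 2, simple Euler steps, and illustrative parameter values:

```python
import numpy as np

def P2(Z):
    # pulse mean field for n = 2 in terms of the order parameter
    return (1 + Z**2/6 + np.conj(Z)**2/6 - (4/3)*Z.real).real

# illustrative parameters; tau_syn sets the synaptic time scale
Delta, eta, kappa, tau_syn = 0.5, 1.0, 2.0, 1.0
Z, I, dt = 0.1 + 0j, 0.0, 1e-3
for _ in range(int(100/dt)):
    dZ = -0.5*((Delta - 1j*eta - 1j*kappa*I)*(1 + Z)**2 + 1j*(1 - Z)**2)
    dI = (P2(Z) - I)/tau_syn          # first-order synaptic filtering
    Z, I = Z + dt*dZ, I + dt*dI

print(abs(Z), I)
```

Since \(P^{(2)}\geq 0\) on the unit disc, the filtered current I stays nonnegative; in the limit \(\tau _{\mathrm {syn}}\to 0\), I tracks \(P^{(2)}(Z)\) and the instantaneous model is recovered.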

Gap junctions

Along with synaptic coupling, the other major form of coupling between neurons is via gap junctions [210], in which a current flows between connected neurons proportional to the difference in their voltages. Using the equivalence of the Theta and QIF neuron, it was shown in [211] that adding all-to-all gap junction coupling to (7) results in the equations

$$ \dot{\theta}_{k}=1-\cos{\theta_{k}}-\kappa^{\text{GJ}} \sin{\theta _{k}}+(1+\cos{\theta_{k}}) \Biggl( \eta_{k}+\kappa I+\frac{\kappa^{\text{GJ}}}{N}\sum _{j=1}^{N}\mathrm {tn}(\theta_{j}) \Biggr) , $$

where \(\kappa^{\text{GJ}}\) is the strength of gap junction coupling and the function \(\mathrm {tn}(\theta):=\sin{\theta}/ (1+\cos{\theta }+\epsilon)\) with \(0<\epsilon\ll1\) stems from the coordinate transformation between Theta and QIF neurons. Note that (40) is still a sinusoidally coupled system. Assuming a Lorentzian distribution of excitability \(\eta_{k}\) centered at η̂ with width Δ, the dynamics in the limit of infinitely many oscillators are given by the Ott–Antonsen equation,

$$ \dot{Z}=\frac{1}{2} \bigl((i\hat {\eta }-\Delta) (1+Z)^{2}-i(1- Z)^{2} \bigr)+\frac{1}{2} \bigl(i(1+Z)^{2} \bigl(\kappa I+\kappa^{\text{GJ}}Q\bigr)+\kappa^{\text{GJ}}\bigl(1- Z^{2}\bigr) \bigr) , $$


$$\begin{aligned} Q&=\sum_{m=1}^{\infty}\bigl(b_{m} Z^{m}+\bar{b}_{m} \bar{Z}^{m} \bigr), \quad b_{m}=\frac{i(\rho^{m+1}-\rho^{m-1})}{2\sqrt{2\epsilon+\epsilon^{2}}}, \end{aligned}$$

and \(\rho=\sqrt{2\epsilon+\epsilon^{2}}-1-\epsilon\). Note that the input current I is still to be defined: there could be gap junction coupling alone, \(I=0\), instantaneous synaptic input (38), or synaptic dynamics (39) as defined above.
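The coefficients \(b_{m}\) are precisely the Fourier coefficients of the function tn, which is what makes the series for Q the average of tn over the Ott–Antonsen phase density; since \(|\rho |<1\), the series converges geometrically and can be truncated in practice. This can be verified numerically (the value of ε is illustrative):

```python
import numpy as np

eps = 0.05                                  # regularization, 0 < eps << 1
rho = np.sqrt(2*eps + eps**2) - 1 - eps     # note |rho| < 1

def b(m):
    # series coefficients b_m of Q as given above
    return 1j*(rho**(m + 1) - rho**(m - 1))/(2*np.sqrt(2*eps + eps**2))

def tn(theta):
    return np.sin(theta)/(1 + np.cos(theta) + eps)

# compare b_m with the numerically computed Fourier coefficients of tn
theta = np.linspace(0, 2*np.pi, 1 << 14, endpoint=False)
errs = [abs(np.mean(tn(theta)*np.exp(-1j*m*theta)) - b(m))
        for m in range(1, 6)]
print(max(errs))  # spectrally small
```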

The reduced equations allow one, for example, to study what effect the strength of the gap junction coupling has on the dynamics. Laing [211] found that for excitatory synaptic coupling (i.e., \(\kappa >0\)) increasing the strength of gap junction coupling could induce oscillations in the mean field via a Hopf bifurcation, and destroy previously existing bistability between steady states with high and low mean firing rates. For inhibitory synaptic coupling (i.e., \(\kappa<0\)) increasing the strength of gap junction coupling stabilized a steady state with high mean firing rate, inducing bistability in the network. In spatially extended systems, it was found that gap junction coupling could destabilize “bump” states via a Hopf bifurcation, and create traveling waves of activity.

Note that in recent work [212] the authors showed that one can take the limit \(\epsilon\rightarrow0\) in the above derivation, thus simplifying the analysis and allowing one to treat synaptic and gap junctional coupling (in an infinite network of QIF neurons) on equal footing.

Conductance dynamics

The above models for Theta neurons have all assumed that synaptic coupling is via the injection of current pulses. However, Ref. [60] considers a model in which the synaptic input is a current equal to the product of a conductance and the difference between the voltage of a QIF neuron and a reversal potential \(V^{\text{rev}}\). Converting to Theta neuron variables, a particular case of their model can be written as

$$ \dot{\theta}_{k} = 1-\cos{\theta_{k}}+(1+\cos{ \theta_{k}}) \bigl(\eta _{k}+\mathrm {g}(t)V^{\text{rev}} \bigr)-\mathrm {g}(t)\sin{\theta_{k}} $$

with a time-dependent gating function

$$ \mathrm {g}(t)=\kappa^{\mathrm {g}}P^{\infty}(t) $$

that depends on the network output modulated by the coupling strength \(\kappa^{\mathrm {g}}>0\). (Note that quantities like g and \(V^{\text{rev}}\) have been non-dimensionalized by scaling relative to dimensional quantities.) The corresponding Ott–Antonsen equations read

$$ \dot{Z}=\frac{1}{2} \bigl((i\hat {\eta }-\Delta) (1+Z)^{2}-i(1- Z)^{2} \bigr)+\frac{1}{2} \bigl(i(1+Z)^{2} \mathrm {g}V^{\text{rev}}+\bigl(1-Z^{2}\bigr)\mathrm {g}\bigr) $$

which is closed since \(\mathrm {g}(t)\) is a function of Z by (37).

The dynamics of this network are straightforward and as expected: for inhibitory coupling (\(V^{\text{rev}}<0\)) there is one stable fixed point for all η̂, while for excitatory coupling (\(V^{\text{rev}}>0\)) there can be a range of negative η̂ values for which the network is bistable between steady states with high and low average firing rates. This bistability in a network with excitatory self-coupling is of interest, as such a network can be thought of as a one-bit “memory”, stably storing one of two states.

Populations of Winfree oscillators

The state of a Winfree oscillator [79] is also described by a single angular variable. The Winfree model predates the Kuramoto model and mimics the behavior of biological systems such as flashing fireflies or circadian rhythms in Drosophila [213]. In general, the Winfree model does not exhibit sinusoidal coupling. But under suitable assumptions, a network of Winfree oscillators is amenable to simplification through the Ott–Antonsen reduction [214]. Consider a network of N Winfree phase oscillators which evolve according to

$$ \dot{\theta}_{k}=\omega_{k}+\frac{\epsilon}{N}\sum _{j=1}^{N}\hat {P}(\theta_{j})Q( \theta_{k}) $$

for \(k=1, \dotsc, N\) and 2π-periodic functions Q and \(\hat{P}\). The function Q is the phase response curve of an oscillator, which can be measured experimentally or determined from a model neuron [215]. If we set \(Q(\theta)=\sin{\beta }-\sin{(\theta+\beta)}\) with parameter β, then we have a sinusoidally coupled phase oscillator network. Moreover, suppose that the network interaction is given by a pulsatile function \(\hat{P}(\theta )=P_{n}(\theta-\pi)\). While \(\hat{P}\) has its maximum at \(\theta=0\) (unlike the interactions for the Theta neuron), it can be expanded in a similar way as (35) into powers of the Kuramoto order parameter. Assuming that the intrinsic frequencies are distributed as a Lorentzian, we obtain an Ott–Antonsen equation that describes the dynamics in the limit of infinitely large networks; see [214] for details.
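A finite-size network of this form can also be simulated directly, which is useful for comparison with the mean-field description of [214]. A minimal sketch with the sinusoidal phase response curve \(Q(\theta)=\sin{\beta}-\sin{(\theta+\beta)}\) and the shifted pulse \(\hat{P}\) (all parameter values illustrative):

```python
import numpy as np
from math import factorial

# finite Winfree network; P_hat(theta) = P_n(theta - pi) peaks at theta = 0
N, n, eps_c, beta = 500, 2, 0.2, 0.0
rng = np.random.default_rng(1)
omega = 1.0 + 0.05*rng.standard_cauchy(N)   # Lorentzian intrinsic frequencies
theta = rng.uniform(0, 2*np.pi, N)
a_n = 2**n * factorial(n)**2 / factorial(2*n)

dt = 0.01
for _ in range(5000):
    mean_field = np.mean(a_n*(1 + np.cos(theta))**n)   # = <P_n(theta - pi)>
    theta += dt*(omega
                 + eps_c*mean_field*(np.sin(beta) - np.sin(theta + beta)))

Z = np.mean(np.exp(1j*theta))
print(abs(Z))   # degree of synchrony after transients
```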

Several groups have used this description to study the dynamics of infinite networks of Winfree oscillators. Pazó and Montbrió [214] found that such a network typically has either an asynchronous state (constant mean field) or a synchronous state (periodic oscillations in the mean field, indicating partial synchrony within the network) as attractors. They also found that varying n (the sharpness of \(P_{n}\)) had a significant effect on the synchronizability of the network. Laing [206] studied a spatially-extended network of Winfree oscillators and found a variety of stationary, traveling, and chaotic spatiotemporal patterns. Finally, Gallego et al. [216] extended the work in [214], considering a variety of types of pulsatile functions and phase response curves.

Coupled populations of neurons

While the previous sections discussed a network consisting of a single population of all-to-all coupled model neurons, an obvious generalization is to consider networks of two or more populations. Consider M populations of Theta neurons and let \(P^{(n)}_{\tau}\) denote the output of population τ. For example for synaptic interaction amongst populations, (38) generalizes to

$$ I_{\sigma}(t) = \sum_{\tau=1}^{M}\kappa_{\sigma\tau }P^{(n)}_{\tau}(t), $$

where \(\kappa_{\sigma\tau}\) is the input strength from population τ to population σ. Writing each \(P_{\tau}\) in terms of the order parameter \(Z_{\tau}\) of population τ, we obtain a closed set of M Ott–Antonsen equations (19) that describe the dynamics for infinitely large populations.
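As a sketch, two coupled populations (one excitatory, one inhibitory) with \(n=2\) pulses can be integrated as follows; the coupling matrix and all other parameter values are illustrative:

```python
import numpy as np

def P2(Z):
    # pulse mean field for n = 2 in terms of the order parameter
    return (1 + Z**2/6 + np.conj(Z)**2/6 - (4/3)*Z.real).real

def rhs(Z, K, Delta, eta):
    # one Ott-Antonsen equation per population, coupled through I_sigma
    I = K @ np.array([P2(z) for z in Z])
    return np.array([-0.5*((Delta[s] - 1j*eta[s] - 1j*I[s])*(1 + Z[s])**2
                           + 1j*(1 - Z[s])**2) for s in range(len(Z))])

# population 0 excitatory, population 1 inhibitory; K[s, t] is the
# input strength from population t to population s
K = np.array([[1.0, -2.0],
              [3.0,  0.0]])
Delta, eta = np.array([0.3, 0.3]), np.array([1.0, 1.0])
Z, dt = np.array([0.1 + 0j, 0.1 + 0j]), 1e-3
for _ in range(int(50/dt)):
    Z = Z + dt*rhs(Z, K, Delta, eta)

print(np.abs(Z))
```

The same structure scales to any number M of populations; only the coupling matrix and the parameter vectors change.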

Interacting populations of neural oscillators give rise to neural rhythms. Laing [206] considered a network of two coupled populations of Theta neurons, one inhibitory and one excitatory. Such networks support a periodic PING rhythm [56] in which the activity of both populations is periodic, with the peak activity of the excitatory population preceding that of the inhibitory one. Analyses of similar types of networks were performed in [52, 60, 102]. Periodic behavior of the mean-field equations of coupled populations of Theta neurons (or, equivalently, QIF neurons) allows one to extract macroscopic phase response curves [217], which makes it possible to treat such ensembles as single oscillatory units in weakly coupled networks.

Coupled populations of Winfree oscillators support a range of dynamics. In Ref. [214] the authors considered a symmetric pair of networks of Winfree oscillators. They observed a variety of dynamics such as a quasiperiodic chimera state, in which one population is perfectly synchronous while the order parameter of the other undergoes quasiperiodic oscillations. They also found a chaotic chimera state where one population is phase synchronized while the order parameter of the other one fluctuates chaotically.

Further generalizations

The oscillator populations considered above do not have any sense of space themselves, apart from possibly two networks being at different points in space. The brain is three-dimensional, although the presence of layered structures could lend itself to a description in terms of a series of coupled two-dimensional domains. Regardless, the spatial aspects of neural dynamics should not be ignored. Several authors have generalized the techniques discussed above to spatial domains, deriving neural field models: spatiotemporal evolution equations for macroscopic quantities [94, 206, 211, 218–221]. The main advantage of using this new generation of neural field models is that, unlike classical models [58, 59], the derivations from networks of Theta neurons are exact rather than heuristic. Rather than considering neural field models on continuous spatial domains, one could consider them on a discretized network where each node is a brain region and coupling strengths are given, for example, by connectome data. We will briefly touch upon these approaches in Sect. 5 below.

All of the networks above have been all-to-all coupled which is rarely the case in real-world systems. The in-degree of a neuron is the number of neurons connecting to it, whereas the out-degree is the number of neurons to which it connects. For all-to-all coupled networks all neurons have the same in- and out-degree (\(N-1\) for a network of N neurons with no self-coupling). Several groups have considered networks in which the degrees are distributed, having a power law distribution, for example [172, 222224]. The mean-field reduction techniques discussed above can be used to accurately and efficiently investigate the influence of this aspect of network structure on dynamics, and this is of great interest.

Networks of identical oscillators (whether finite or infinite) are described by the Watanabe–Strogatz equations. While the application to Kuramoto-type oscillator networks is fairly standard, the corresponding mean-field equations for Theta neurons (27a)–(27c) have only recently been analyzed.

Applications to neural modeling

The mean-field reductions and their applications to populations of neural units—as next-generation neural mass models—can give new modeling approaches to understand the dynamics of large-scale neural networks. In the previous section, we took a descriptive dynamical systems perspective to understand the asymptotic dynamics and their bifurcations. We now change the perspective to elucidate how the mean-field reductions can give new insights into neural network dynamics.

Dynamics of neural circuits and populations

Example 3

How does a heterogeneous network of all-to-all coupled QIF neurons react to a transient stimulus (see Problem 3 above)? To answer this question using exact mean-field reductions, we analyze a situation similar to that studied by Montbrió et al. [102]: Consider a network of QIF neurons (9) with dynamics governed by

$$ \dot{V}_{k}=V_{k}^{2}+I_{k}+ \kappa r(t)+s(t) $$

for \(k=1,\dotsc, N\) with the rule that when the voltage \(V_{k}=\infty\), it is reset to \(V_{k}=-\infty\). The \(I_{k}\) are chosen from a Lorentzian distribution with mean η̂ and width parameter Δ, and neurons are coupled all-to-all with coupling strength κ. The mean firing rate is given by

$$r(t)=\frac{1}{N}\sum_{j=1}^{N} \sum_{\ell}\delta\bigl(t-t_{j}^{\ell}\bigr), $$

representing the average neural activity, where \(t_{j}^{\ell}\) is the \(\ell \)th firing time of the jth neuron and the sum over \(\ell \) runs only over past firing times, \(t_{j}^{\ell}< t\). The input current \(s(t)\) will be specified below. Letting \(N\rightarrow\infty\), the network is described by (23), which in the present case becomes

$$ \dot{W}=i\hat {\eta }+\Delta-iW^{2}+i \bigl(\kappa r(t)+s(t) \bigr) , $$

where \(r=\operatorname {Re}(W )/\pi\). Suppose we set \(\Delta =0.1\), \(\hat {\eta }=-0.5\) and \(\kappa=5\). Having \(\hat {\eta }<0\) means that in the absence of coupling most neurons will be excitable rather than firing, and \(\kappa>0\) models excitatory coupling.

We set the transient input to be \(s(t)=0.3\) for \(50< t<150\) and \(s(t)=0\) otherwise. The mean firing rate and voltage (i.e., averages over the ensemble of all neurons) of the network are shown in Fig. 6 (top and bottom, respectively) for both the network (48) and the mean field description (49). For these parameters the network is bistable: After the input current is removed, the network settles into an active state rather than returning to the quiescent state that it was in before stimulation. The agreement between the two descriptions is excellent, but the mean-field description is obviously much easier to numerically integrate and is also amenable to bifurcation analysis, as for example shown in [102].

Figure 6

Response of the network (48) (blue lines) and the mean field description (49) (red lines) to a transient input turned on at \(t=50\) and off at \(t=150\) (shaded background). A network of 1000 neurons was used and the data from the network was smoothed by convolution with a Gaussian of standard deviation 0.05 time units before plotting
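The mean-field side of this experiment can be reproduced from (49) alone; the sketch below uses the parameter values stated in the text with an RK4 integrator (step size and initial condition are illustrative):

```python
import numpy as np

# parameters from the text: Delta = 0.1, eta_hat = -0.5, kappa = 5
Delta, eta, kappa = 0.1, -0.5, 5.0

def s(t):
    # transient stimulus: on for 50 < t < 150
    return 0.3 if 50 < t < 150 else 0.0

def f(W, t):
    r = W.real/np.pi                 # firing rate r = Re(W)/pi
    return 1j*eta + Delta - 1j*W**2 + 1j*(kappa*r + s(t))

W, t, dt, r_before = 0.1 + 0j, 0.0, 0.002, None
while t < 300:
    k1 = f(W, t)
    k2 = f(W + 0.5*dt*k1, t + 0.5*dt)
    k3 = f(W + 0.5*dt*k2, t + 0.5*dt)
    k4 = f(W + dt*k3, t + dt)
    W, t = W + dt*(k1 + 2*k2 + 2*k3 + k4)/6, t + dt
    if r_before is None and t >= 50:
        r_before = W.real/np.pi      # quiescent rate just before stimulus

r_after = W.real/np.pi               # rate long after the stimulus is off
print(r_before, r_after)
```

As described above, the firing rate after the stimulus settles at a value well above the quiescent rate before it, reflecting the bistability of the network for these parameters.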

The influence of oscillatory drive on network dynamics related to cognitive processing in simple working memory and memory recall tasks was studied by Schmidt et al. [225] in coupled populations of inhibitory and excitatory QIF neurons. The authors use the exact mean-field reductions reviewed here to elucidate how oscillatory input frequency stimulates the intrinsic dynamics in networks of recurrently coupled spiking neurons to change memory states. They find that slow delta and theta band oscillations are effective in activating network states associated with memory recall, while faster beta oscillations can serve to clear memory states via resonance mechanisms.

Balanced sparse networks of inhibitory QIF neurons were studied by di Volo and Torcini [226] to explain the onset of self-sustained collective oscillations via a reduction to mean-field dynamics. This is achieved by applying the mean-field reductions to sparse networks with diverging coupling strength, an approximation that works surprisingly well, as their bifurcation diagrams for the onset of collective oscillations show. The application of the mean-field reductions to sparse networks remains ad hoc, however, and further mathematical insights towards a well-defined limit would be desirable.

The work by Dumont and Gutkin [227] used exact phase reductions to identify the biophysical neuronal and synaptic properties that are responsible for macroscopic dynamics, such as the interneuronal gamma (ING) and the pyramidal-interneuronal gamma (PING) population rhythms. The key ingredient is the phase response curve of oscillatory macroscopic behavior of two coupled populations of QIF neurons [217], one excitatory and one inhibitory, as mentioned above. Assuming weak coupling between two sets of two populations (i.e., four populations total) the authors extracted phase locking patterns of the coupled multipopulation model.

A number of other studies have employed mean-field reductions for populations of QIF neurons to elucidate how microscopic neural properties affect the macroscopic dynamics [228, 229]. This includes insights into networks of heterogeneous QIF neurons with time delayed, all-to-all synaptic coupling [230, 231], or two such networks [232]. Moreover, the mean-field reductions are also useful to analyze spatially extended networks of both Theta and QIF neurons, where localized patterns—such as bump states—can occur; cf. [94, 220].

Large-scale neural dynamics

The theory above is particularly pertinent for the study of mesoscopic or macroscopic brain dynamics, i.e., dynamics arising from tissue that contains large populations of neurons. Such dynamics are recorded using a variety of different modalities in animal or human studies, including local field potentials (LFP) and magneto- or electroencephalographic (MEG/EEG) recordings [19]. These recording modalities pick up changes in dynamics that arise in conjunction with fluctuations in populations of neurons. Thus, when recordings are taken from multiple sensors in different positions simultaneously, one can map the spatiotemporal dynamics of large regions of the brain. The inclusion of multiple sensors yields a natural way to construct a large-scale network representation of the dynamics of the brain, in which sensors are nodes of the network. Alternatively, dynamics can be attributed to distributed regions of interest within the brain, for example using approaches to solve the inverse problem and thereby reconstruct a network in source space [233, 234].

Having defined nodes, there are several ways to define the edges of large-scale brain networks and thereby determine interactions [235]; in a general context this inverse problem is known as network reconstruction [236]. Broadly speaking, edges of brain networks can be characterized as functional, structural, or effective connections [19, 237]. For functional connectivity, a measure of statistical interrelation is used to quantify the extent to which the dynamics of nodes co-evolve (see, for example, [238]), with edges linking pairs of nodes that are highly correlated being assigned large weights. Structural connectivity, on the other hand, defines edges on anatomical grounds, for example via tracing of axonal tracts [239]. Finally, edges in effective connectivity networks are defined as connection strengths in explicit dynamic models that are tuned such that dynamic recordings are well explained by the model [240].

These different ways of representing the brain in terms of networks yield several avenues for investigation that are relevant to the discussion above. Specifically, network analyses have provided insight into the mechanisms of both function and dysfunction [18, 31, 241, 242], and modeling frameworks such as those described above are required in order to explain findings and develop testable predictions [243]. A particularly pertinent challenge is to understand to what extent structural connectivity (a structural property of the network) shapes emergent functional connectivity (a property of the dynamics) in both healthy and disease conditions [16, 17, 19, 244–247].

Functional connectivity has been shown to be altered in myriad disorders of the brain, including epilepsy and Alzheimer’s disease [241, 248–251]. It is therefore becoming an important marker for brain disorders, as well as a potentially important means of understanding disease and designing therapy [18, 21]. However, in order to link different data modalities and to develop effective and efficient treatment, it is crucial to understand why specific changes in dynamics occur. The reduction methods described herein could help in this direction by linking fundamental properties of neurons to emergent properties of neuronal networks, which can then be coupled to build an understanding of mesoscopic or whole-brain dynamics [3].

We conclude this section with two very recent examples of how the mean-field reductions reviewed here have been used to understand the dynamics of macroscopic brain activity from experimental data. First, Weerasinghe et al. [252] employed the Kuramoto model and its mean-field reduction to develop new closed-loop approaches for deep brain stimulation to improve the treatment of patients with essential tremor and Parkinson’s disease. Specifically, the Ott–Antonsen equations yield expressions for the mean-field response of an oscillator population, which can be compared with experimentally measured response curves obtained from patients [253]. The idea is that such a model-supported approach eventually yields efficient treatment strategies, for example, by stimulating at the optimal phase and amplitude to maximize efficacy and minimize side effects. Second, Byrne et al. [254] recently developed a novel brain model based on coupled populations of QIF neurons and used it in a number of neurobiological contexts, such as providing an understanding of the changes in power spectra observed in EEG/MEG neuroimaging studies of motor cortex during movement. Such a model is a first step towards bridging the microscopic properties of individual neurons and macroscopic brain dynamics.

Conclusions and open problems

The mean-field descriptions presented in this review are able to bridge spatial scales in coupled oscillator networks since they provide explicit descriptions of the macroscopic dynamics in terms of microscopic quantities. This provides insights into how network coupling properties (for example, a neural connectome) relate to dynamical properties (and thus functional properties) of an oscillator network. Importantly, the equations are not just a black box, but tools from dynamical systems theory that allow us to study explicitly how the dynamics change as network parameters are varied. We conclude by highlighting three sets of challenges for future research.

The first set of challenges relates to the reductions themselves and the mathematics behind them; some of these were already discussed in Sect. 3.3 and along the way. Phase oscillator networks that arise through phase reduction typically have nonsinusoidal coupling due to higher harmonics and nonadditive terms in the interactions. These can arise through strongly nonlinear oscillations or nonlinear interactions between oscillators; see [150, 151, 153] and other references above. Hence, the influence of such interactions on the mean-field reductions still needs to be clarified: while they could fail in certain instances [133], first results indicate that they may still provide useful information over some timescales [136], and further work in this direction is desirable. As an example of a complementary approach, Thiem et al. [255] recently used manifold learning and a neural network to learn the Ott–Antonsen equations governing the Kuramoto model; these techniques are quite generally applicable. Moreover, real-world networks are often modeled as systems subject to noise; here, we point to very recent results that extend the mean-field reductions presented in this review to noisy systems by using a “circular cumulants” approach [73–75].

The second set of challenges concerns the relationship between the mean-field reductions, the underlying microscopic models, and real-world data in the context of neuroscience. How do LFP or EEG measurements relate to the mean-field variables that constitute the reduced system equations? Connectivity can be estimated via neural imaging techniques, but how does this data relate to the coupling strength and phase-lag parameters that appear in the Ott–Antonsen equations of coupled Kuramoto–Sakaguchi populations? Or how does data relate to the coupling parameters of the microscopic models that are compatible with the reduction? These questions become even more intricate for coupled populations of Theta neurons; cf. [254].

The last set of challenges goes well beyond the mean-field reductions presented here. Mathematical tools are helpful to describe the dynamics, but how do the dynamics relate to functional aspects of the (neural) oscillator network? How do we identify dynamics that are pathological, and how do we validate and use models of these dynamics to predict treatment responses? On the large scale, some pathologies such as epilepsy reveal salient abnormal dynamics [25], but alterations in other conditions are more subtle, and model-driven analyses could therefore prove very useful in the clinical context [20, 250–252]. Insights into these fundamental questions will make the mean-field reductions presented in this review even more useful for designing targeted therapies for neural diseases.


1. Naturally, any mean-field approach (including the Ott–Antonsen reduction we consider here) that disregards (finite-size) fluctuations to analyze finite networks cannot capture dynamical effects where these fluctuations are a characteristic property. This includes the balanced state [256, 257].

2. Such networks may be thought of as “networks of networks” [258, 259].

3. Note, however, that the mesoscopic description in terms of collective variables of each subnetwork can have a “phase” and “amplitude” such as the mean phase and the amount of synchrony.

4. In Eq. (1) the coupling is through a pure first harmonic. However, the results presented here are generally valid for coupling through any pure single harmonic of higher order; see for example [155, 260].

5. The order “parameter” Z is an observable which encodes the state of the system, and should not be confused with a system parameter.

6. If only one population is considered, \(M=1\), we simply write \(\alpha_{\sigma\tau}=\alpha\) and \(K_{\sigma\tau}=K\); this corresponds to the Kuramoto–Sakaguchi model. Furthermore, if \(\alpha=0\), we recover the Kuramoto model (2). Here, we regard the number of populations M to be fixed; to take a limit \(M\to\infty\) one should assume that the coupling strengths \(K_{\sigma\tau}\) scale appropriately.

7. Such forms of interaction typically arise in an additional averaging step performed after the phase reduction [43, 261].

8. One may also consider co-rotating frames with time-dependent frequencies. For example, for any given oscillator \((\tau, j)\) one can choose a co-rotating frame in which its phase \(\theta_{\tau, j}\) appears stationary via the transform \(\theta _{\sigma,k}\mapsto\theta_{\sigma,k}-\theta_{\tau,j}\). This transformation changes the structure of (4) but it does not affect the qualitative dynamics.

9. This convention is in line with the firing of the equivalent quadratic integrate-and-fire neuron introduced below.

10. If the oscillators are subject to noise, the continuity equation is a Fokker–Planck equation which contains an additional diffusive term [105, 133, 142, 156].

  11.

    Transport equations are common in physics, where they are known as continuity equations (or, in classical statistical physics, as the Liouville equation describing the time evolution of an ensemble) and express conservation laws. To visualize this in the context of fluid dynamics: if the density in (10) is interpreted as a mass density, then (10) implies that the total mass in the system is a conserved quantity [262].
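A small numerical sketch of this conservation property (our own illustration under assumed values, not from the text): advecting a density around a ring with a conservative upwind scheme leaves the total “mass” unchanged up to floating-point error, because the periodic fluxes telescope:

```python
import numpy as np

# Advection dρ/dt = -∂x(vρ) on a ring [0, 2π), conservative upwind update.
N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = 1.0 + 0.5 * np.cos(x)    # initial density (positive everywhere)
v = 0.8 + 0.3 * np.sin(x)      # velocity field (positive everywhere)
dt = 0.4 * dx / v.max()        # CFL-stable time step

mass0 = rho.sum() * dx
for _ in range(500):
    flux = v * rho                                   # upwind flux (v > 0)
    rho = rho - (dt / dx) * (flux - np.roll(flux, 1))
mass = rho.sum() * dx
# On the ring the flux differences sum to zero each step,
# so total mass is conserved (to rounding error).
```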

  12.

    Here we make an implicit assumption on the regularity of \(H_{\sigma}\) in time (and potentially in an additional parameter): \(H_{\sigma}\) has to be sufficiently smooth that (14) yields an \(a_{\sigma}(\omega,t)\) for which the residue theorem can be used to evaluate (15) at any time t.

  13.

    This problem can, however, be solved by respecting the symmetries underlying the system, i.e., by integrating either the WS equations or the dynamical equations governing the Möbius transformations, which in turn can be used to compute trajectories of individual oscillators.

  14.

    Compactly supported distributions of intrinsic frequencies have been approximated by rational distributions [263, 264], but it is not clear whether the limit is independent of the approximation.

  15.

    The resulting frequency distribution is curiously similar to Norbert Wiener’s notion of the frequency distribution of brain waves around the alpha wave band; see, e.g., [265].

  16.

    By symmetry there is a corresponding pattern in \(\mathrm{SD} = \lbrace R_{1}=1, R_{2}<1 \rbrace\).

  17.

    A heteroclinic trajectory between two distinct saddles is a solution that is attracted to one saddle as time increases and to the other saddle as time evolves backward.

  18.

    Unfortunately, the term “network” has a double meaning here: on the one hand, we study oscillatory units which form networks through their (physical and functional) interactions, on the other hand, heteroclinic networks are abstract networks of dynamical states linked by heteroclinic trajectories which allow dynamical transitions.





Abbreviations

ING: interneuronal gamma
LFP: local field potentials
PING: pyramidal-interneuronal gamma
QIF: quadratic integrate and fire
SNIC: saddle node bifurcation on an invariant circle




References

  1. Winfree AT. The geometry of biological time. New York: Springer; 2001. (Interdisciplinary applied mathematics; vol. 12).
  2. Strogatz SH. Nature. 2001;410(6825):268.
  3. Breakspear M. Nat Neurosci. 2017;20(3):340.
  4. Liu C, Weaver DR, Strogatz SH, Reppert SM. Cell. 1997;91(6):855.
  5. Buck J, Buck E. Science. 1968;159(3821):1319.
  6. Gilpin W, Bull MS, Prakash M. Nat Rev Phys. 2020.
  7. Collins JJ, Stewart I. J Nonlinear Sci. 1993;3(1):349.
  8. Strogatz SH, Abrams DM, McRobie A, Eckhardt B, Ott E. Nature. 2005;438(7064):43.
  9. Strogatz SH, Kronauer RE, Czeisler CA. Am J Physiol. 1987;253(1 Pt 2):R172.
  10. Leloup JC, Goldbeter A. BioEssays. 2008;30(6):590.
  11. Smolen P, Byrne J. Encyclopedia of neuroscience. 2009.
  12. Zavala E, Wedgwood KC, Voliotis M, Tabak J, Spiga F, Lightman SL, Tsaneva-Atanasova K. Trends Endocrinol Metab. 2019;30(4):244.
  13. Ghosh AK, Chance B, Pye E. Arch Biochem Biophys. 1971;145(1):319.
  14. Danø S, Sørensen PG, Hynne F. Nature. 1999;402(6759):320.
  15. Massie TM, Blasius B, Weithoff G, et al. Proc Natl Acad Sci USA. 2010;107(9):4236.
  16. Honey CJ, Kotter R, Breakspear M, Sporns O. Proc Natl Acad Sci USA. 2007.
  17. Honey CJ, Thivierge JP, Sporns O. NeuroImage. 2010;52(3):766.
  18. Fornito A, Zalesky A, Breakspear M. Nat Rev Neurosci. 2015;16(3):159.
  19. Bassett DS, Sporns O. Nat Neurosci. 2017;20(3):353.
  20. Kuhlmann L, Lehnertz K, Richardson MP, Schelter B, Zaveri HP. Nat Rev Neurol. 2018;14(10):618.
  21. Goodfellow M, Rummel C, Abela E, Richardson M, Schindler K, Terry J. Sci Rep. 2016;6:29215.
  22. Strogatz SH. Sync: the emerging science of spontaneous order. London: Penguin; 2004.
  23. Glass L. Nature. 2001;410:277.
  24. Dörfler F, Bullo F. Automatica. 2014;50(6):1539.
  25. Kahana MJ. J Neurosci. 2006;26(6):1669.
  26. Lehnertz K, Geier C, Rings T, Stahn K. EPJ Nonlinear Biomed Phys. 2017;5:2.
  27. Fell J, Axmacher N. Nat Rev Neurosci. 2011;12(2):105.
  28. Fries P. Annu Rev Neurosci. 2009;32:209.
  29. Wang XJ. Physiol Rev. 2010;90(3):1195.
  30. Singer W, Gray CM. Annu Rev Neurosci. 1995;18:555.
  31. Fries P. Trends Cogn Sci. 2005;9(10):474.
  32. Kirst C, Timme M, Battaglia D. Nat Commun. 2016;7:11061.
  33. Deschle N, Daffertshofer A, Battaglia D, Martens EA. Front Appl Math Stat. 2019;5:28.
  34. Marder E, Bucher D. Curr Biol. 2001;11:R986.
  35. Smith JC, Ellenberger HH, Ballanyi K, Richter DW, Feldman JL. Science. 1991;254(5032):726.
  36. Butera RJ, Rinzel J, Smith JC. J Neurophysiol. 1999;82(1):382.
  37. Jones MW, Wilson MA. PLoS Biol. 2005;3(12):e402.
  38. Uhlhaas PJ, Singer W. Neuron. 2006;52(1):155.
  39. Hammond C, Bergman H, Brown P. Trends Neurosci. 2007;30(7):357.
  40. Lehnertz K, Bialonski S, Horstmann MT, Krug D, Rothkegel A, Staniek M, Wagner T. J Neurosci Methods. 2009;183(1):42.
  41. Rummel C, Goodfellow M, Gast H, Hauf M, Amor F, Stibal A, Mariani L, Wiest R, Schindler K. Neuroinformatics. 2013;11:159.
  42. Słowiński P, Sheybani L, Michel CM, Richardson MP, Quairiaux C, Terry JR, Goodfellow M. eNeuro. 2019;6(4):ENEURO.0059-19.2019.
  43. Ashwin P, Coombes S, Nicks R. J Math Neurosci. 2016.
  44. Pietras B, Daffertshofer A. Phys Rep. 2019.
  45. Ashwin P, Swift JW. J Nonlinear Sci. 1992;2(1):69.
  46. Hansel D, Mato G, Meunier C. Europhys Lett. 1993;23(5):367.
  47. Hoppensteadt FC, Izhikevich EM. Weakly connected neural networks. New York: Springer; 1997. (Applied mathematical sciences; vol. 126).
  48. Brown E, Moehlis J, Holmes P. Neural Comput. 2004;16(4):673.
  49. Nakao H. Contemp Phys. 2016;57(2):188.
  50. Monga B, Wilson D, Matchen T, Moehlis J. Biol Cybern. 2018.
  51. Cabral J, Hugues E, Sporns O, Deco G. NeuroImage. 2011;57(1):130.
  52. Luke TB, Barreto E, So P. Front Comput Neurosci. 2014;8:145.
  53. Britz J, Van De Ville D, Michel CM. NeuroImage. 2010;52(4):1162.
  54. Destexhe A, Sejnowski TJ. Biol Cybern. 2009;101(1):1.
  55. Gupta S, Campa A, Ruffo S. Statistical physics of synchronization. Berlin: Springer; 2018.
  56. Börgers C, Kopell N. Neural Comput. 2003;15(3):509.
  57. Buzsáki G, Wang XJ. Annu Rev Neurosci. 2012;35(1):203.
  58. Wilson HR, Cowan JD. Biol Cybern. 1973;13(2):55.
  59. Amari S. Biol Cybern. 1977;27(2):77.
  60. Coombes S, Byrne Á. In: Corinto F, Torcini A, editors. Nonlinear dynamics in computational neuroscience. Cham: Springer; 2019. p. 1–16.
  61. Strogatz SH. Nonlinear dynamics and chaos. Reading: Perseus Books Publishing; 1994.
  62. Izhikevich EM. Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge: MIT Press; 2007.
  63. Panaggio MJ, Abrams DM. Nonlinearity. 2015;28(3):R67.
  64. Schöll E. Eur Phys J Spec Top. 2016;225(6–7):891.
  65. Omel’chenko OE. Nonlinearity. 2018;31(5):R121.
  66. Porter M, Gleeson J. Dynamical systems on networks. Cham: Springer; 2016. (Frontiers in applied dynamical systems: reviews and tutorials; vol. 4).
  67. Rodrigues FA, Peron TKD, Ji P, Kurths J. Phys Rep. 2016;610:1.
  68. Pecora LM, Carroll TL. Phys Rev Lett. 1998;80(10):2109.
  69. Barahona M, Pecora LM. Phys Rev Lett. 2002;89(5):054101.
  70. Pereira T, Eldering J, Rasmussen M, Veneziani A. Nonlinearity. 2014;27(3):501.
  71. Holme P, Saramäki J. Phys Rep. 2012;519:97.
  72. Bick C, Field MJ. Nonlinearity. 2017;30(2):558.
  73. Tyulkina IV, Goldobin DS, Klimenko LS, Pikovsky A. Phys Rev Lett. 2018;120:264101.
  74. Goldobin DS, Tyulkina IV, Klimenko LS, Pikovsky A. Chaos. 2018;28(10):1.
  75. Goldobin DS. Fluct Noise Lett. 2019;18(2):1940002.
  76. Gottwald GA. Chaos. 2015;25(5):053111.
  77. Skardal PS, Restrepo JG, Ott E. Chaos. 2017;27:083121.
  78. Hannay KM, Forger DB, Booth V. Sci Adv. 2018;4(8):e1701047.
  79. Winfree AT. J Theor Biol. 1967;16(1):15.
  80. Kuramoto Y. Chemical oscillations, waves, and turbulence. New York: Springer; 1984.
  81. Strogatz SH. Physica D. 2000;143:1.
  82. Sakaguchi H, Kuramoto Y. Prog Theor Phys. 1986;76(3):576.
  83. Cumin D, Unsworth CP. Physica D. 2007;226(2):181.
  84. Breakspear M, Heitmann S, Daffertshofer A. Front Human Neurosci. 2010;4:190.
  85. Schmidt H, Petkov G, Richardson MP, Terry JR. PLoS Comput Biol. 2014;10(11):e1003947.
  86. Ermentrout GB. Scholarpedia. 2008;3(3):1398.
  87. Ermentrout GB, Kopell N. SIAM J Appl Math. 1986;46(2):233.
  88. Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. New York: Springer; 2010. (Interdisciplinary applied mathematics; vol. 35).
  89. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge: Cambridge University Press; 2014.
  90. Monteforte M, Wolf F. Phys Rev Lett. 2010;105(26):268104.
  91. Osan R, Ermentrout GB. Neurocomputing. 2001;38:789.
  92. Ermentrout GB, Rubin J, Osan R. SIAM J Appl Math. 2002;62(4):1197.
  93. Luke TB, Barreto E, So P. Neural Comput. 2013;25(12):3207.
  94. Laing CR. Phys Rev E. 2014;90(1):010901.
  95. Gutkin B. In: Encyclopedia of computational neuroscience. 2015. p. 2958–65.
  96. Latham PE, Richmond B, Nelson P, Nirenberg S. J Neurophysiol. 2000;83(2):808.
  97. Hansel D, Mato G. Phys Rev Lett. 2001;86(18):4175.
  98. Brunel N, Latham PE. Neural Comput. 2003;15(10):2281.
  99. Kopell N, Ermentrout GB. Proc Natl Acad Sci USA. 2004;101(43):15482.
  100. Laing CR. Chaos. 2009;19(1):013113.
  101. Omel’chenko OE. Nonlinearity. 2013;26(9):2469.
  102. Montbrió E, Pazó D, Roxin A. Phys Rev X. 2015;5(2):021028.
  103. Pietras B, Daffertshofer A. Chaos. 2016;26(10):103101.
  104. Mardia KV, Jupp PE. Directional statistics. Hoboken: Wiley; 1999. (Wiley series in probability and statistics).
  105. Sakaguchi H. Prog Theor Phys. 1988;79(1):39.
  106. Strogatz SH, Mirollo RE. J Stat Phys. 1991;63(3–4):613.
  107. Lancellotti C. Transp Theory Stat Phys. 2005;34(7):523.
  108. Mirollo RE, Strogatz SH. J Nonlinear Sci. 2007;17(4):309.
  109. Carrillo JA, Choi YP, Ha SY, Kang MJ, Kim Y. J Stat Phys. 2014;156(2):395.
  110. Dietert H. J Math Pures Appl. 2016;105(4):451.
  111. Dietert H, Fernandez B, Gérard-Varet D. Commun Pure Appl Math. 2018;71(5):953.
  112. Carrillo JA, Choi YP, Pareschi L. J Comput Phys. 2019;376:365.
  113. Medvedev GS. SIAM J Math Anal. 2014;46(4):2743.
  114. Chiba H, Medvedev GS. Discrete Contin Dyn Syst, Ser A. 2019;39:131.
  115. Ott E, Antonsen TM. Chaos. 2008;18(3):037113.
  116. Ott E, Antonsen TM. Chaos. 2009;19(2):023117.
  117. Ott E, Hunt BR, Antonsen TM. Chaos. 2011;21(2):025112.
  118. Martens EA, Barreto E, Strogatz SH, Ott E, So P, Antonsen TM. Phys Rev E. 2009;79(2):026204.
  119. Pazó D, Montbrió E. Phys Rev E. 2009;80(4):046215.
  120. Tsang KY, Mirollo RE, Strogatz SH, Wiesenfeld K. Physica D. 1991;48(1):102.
  121. Wiesenfeld K, Colet P, Strogatz SH. Phys Rev E. 1998;57(2):1563.
  122. Watanabe S, Strogatz SH. Phys Rev Lett. 1993;70(16):2391.
  123. Watanabe S, Strogatz SH. Physica D. 1994;74(3–4):197.
  124. Goebel CJ. Physica D. 1995;80(1–2):18.
  125. Marvel SA, Mirollo RE, Strogatz SH. Chaos. 2009;19(4):043104.
  126. Stewart I. Int J Bifurc Chaos. 2011;21(6):1795.
  127. Chen B, Engelbrecht JR, Mirollo RE. J Phys A, Math Theor. 2017;50(35):355101.
  128. Engelbrecht JR, Mirollo R. Phys Rev Res. 2020;2:023057. arXiv:2002.07827.
  129. Pikovsky A, Rosenblum M. Physica D. 2011;240(9–10):872.
  130. Pikovsky A, Rosenblum M. Phys Rev Lett. 2008;101:264103.
  131. Laing CR. J Math Neurosci. 2018;8(1):4.
  132. Bick C, Timme M, Paulikat D, Rathlev D, Ashwin P. Phys Rev Lett. 2011;107(24):244101.
  133. Lai YM, Porter MA. Phys Rev E. 2013;88(1):012905.
  134. Bick C, Ashwin P, Rodrigues A. Chaos. 2016;26(9):094814.
  135. Ashwin P, Bick C, Burylko O. Front Appl Math Stat. 2016;2(7):7.
  136. Vlasov V, Rosenblum M, Pikovsky A. J Phys A, Math Theor. 2016;49(31):31LT02.
  137. Gottwald GA. Chaos. 2017;27(10):101103.
  138. Smith LD, Gottwald GA. Chaos. 2019;29(9):093127.
  139. Mirollo RE. Chaos. 2012;22(4):043118.
  140. Kuznetsov YA. Elements of applied bifurcation theory. 3rd ed. New York: Springer; 2004. (Applied mathematical sciences; vol. 112).
  141. Brown E, Holmes P, Moehlis J. In: Perspectives and problems in nonlinear science: a celebratory volume in honor of Larry Sirovich. Berlin: Springer; 2003. p. 183–215.
  142. Crawford JD. J Stat Phys. 1994;74(5):1047.
  143. Pietras B, Deschle N, Daffertshofer A. Phys Rev E. 2016;94(5):052211.
  144. Aguiar MAD, Dias APS. Chaos. 2018;28(7):073105.
  145. Tanaka T, Aoyagi T. Phys Rev Lett. 2011;106(22):224101.
  146. Levine JM, Bascompte J, Adler PB, Allesina S. Nature. 2017;546(7656):56.
  147. Ariav G, Polsky A, Schiller J. J Neurosci. 2003;23(21):7750.
  148. Polsky A, Mel BW, Schiller J. Nat Neurosci. 2004;7(6):621.
  149. Memmesheimer RM. Proc Natl Acad Sci USA. 2010;107(24):11092.
  150. Rosenblum M, Pikovsky A. Phys Rev Lett. 2007;98(6):064101.
  151. Ashwin P, Rodrigues A. Physica D. 2016;325:14.
  152. Kralemann B, Pikovsky A, Rosenblum M. New J Phys. 2014;16:085013.
  153. León I, Pazó D. Phys Rev E. 2019;100(1):012211.
  154. Hoppensteadt FC, Izhikevich EM. Phys Rev Lett. 1999;82(14):2983.
  155. Skardal PS, Arenas A. Phys Rev Lett. 2019;122(24):248301.
  156. Acebrón J, Bonilla L, Pérez Vicente C, et al. Rev Mod Phys. 2005;77(1):137.
  157. Pikovsky A, Rosenblum M. Chaos. 2015;25(9):097616.
  158. Lee WS, Ott E, Antonsen TM. Phys Rev Lett. 2009;103(4):044101.
  159. Petkoski S, Spiegler A, Proix T, Aram P, Temprado JJ, Jirsa VK. Phys Rev E. 2016;94(1):012209.
  160. Lohe MA. J Phys A, Math Theor. 2017;50(50):505101.
  161. Schwartz AJ. Am J Math. 1963;85(3):453.
  162. Golubitsky M, Stewart I. The symmetry perspective. Basel: Birkhäuser; 2002. (Progress in mathematics; vol. 200).
  163. Abrams DM, Strogatz SH. Phys Rev Lett. 2004;93(17):174102.
  164. Kemeth FP, Haugland SW, Schmidt L, Kevrekidis IG, Krischer K. Chaos. 2016;26:094815.
  165. Kemeth FP, Haugland SW, Krischer K. Phys Rev Lett. 2018;120(21):214101.
  166. Kuramoto Y, Battogtokh D. Nonlinear Phenom Complex Syst. 2002;4:380.
  167. Martens EA, Bick C, Panaggio MJ. Chaos. 2016;26(9):094819.
  168. Abrams DM, Mirollo RE, Strogatz SH, Wiley DA. Phys Rev Lett. 2008;101(8):084103.
  169. Martens EA, Panaggio MJ, Abrams DM. New J Phys. 2016;18(2):022002.
  170. Palmigiano A, Geisel T, Wolf F, Battaglia D. Nat Neurosci. 2017;20(7):1014.
  171. Laing CR. Chaos. 2012;22(4):043104.
  172. Laing CR, Rajendran K, Kevrekidis IG. Chaos. 2012;22(1):013132.
  173. Choe CU, Ri JS, Kim RS. Phys Rev E. 2016;94(3):032205.
  174. Bick C, Panaggio MJ, Martens EA. Chaos. 2018;28(7):071102.
  175. Panaggio MJ, Abrams DM, Ashwin P, Laing CR. Phys Rev E. 2016;93(1):012218.
  176. Ashwin P, Burylko O. Chaos. 2015;25:013106.
  177. Bick C, Ashwin P. Nonlinearity. 2016;29(5):1468.
  178. Bick C. J Nonlinear Sci. 2017;27(2):605.
  179. Bick C, Sebek M, Kiss IZ. Phys Rev Lett. 2017;119(16):168301.
  180. Skardal PS. Eur Phys J B. 2019;92(2):46.
  181. Montbrió E, Kurths J, Blasius B. Phys Rev E. 2004;70(5):056125.
  182. Hong H, Strogatz SH. Phys Rev E. 2012;85(5):056210.
  183. Maistrenko YL, Penkovsky B, Rosenblum M. Phys Rev E. 2014;89(6):060901.
  184. Hong H, Strogatz SH. Phys Rev Lett. 2011;106(5):054102.
  185. Hong H, Strogatz SH. Phys Rev E. 2011;84(4):046202.
  186. Martens EA. Phys Rev E. 2010;82(1):016216.
  187. Martens EA. Chaos. 2010;20(4):043122.
  188. Abeles M, Bergman H, Gat I, Meilijson I, Seidemann E, Tishby N, Vaadia E. Proc Natl Acad Sci USA. 1995;92(19):8616.
  189. Tognoli E, Kelso JAS. Neuron. 2014;81(1):35.
  190. Ashwin P, Timme M. Nature. 2005;436(7047):36.
  191. Weinberger O, Ashwin P. Discrete Contin Dyn Syst, Ser B. 2018;23(5):2043.
  192. Rabinovich MI, Varona P, Selverston A, Abarbanel HDI. Rev Mod Phys. 2006;78(4):1213.
  193. Rabinovich MI, Afraimovich VS, Bick C, Varona P. Phys Life Rev. 2012;9(1):51.
  194. Hansel D, Mato G, Meunier C. Phys Rev E. 1993;48(5):3470.
  195. Ashwin P, Orosz G, Wordsworth J, Townley S. SIAM J Appl Dyn Syst. 2007;6(4):728.
  196. Bick C. Phys Rev E. 2018;97(5):050201.
  197. Shanahan M. Chaos. 2010;20(1):013108.
  198. Wildie M, Shanahan M. Chaos. 2012;22(4):043131.
  199. Deco G, Cabral J, Woolrich MW, Stevner AB, van Hartevelt TJ, Kringelbach ML. NeuroImage. 2017;152:538.
  200. Park HJ, Friston K. Science. 2013;342(6158):1238411.
  201. Komarov MA, Pikovsky A. Phys Rev Lett. 2013;110(13):134101.
  202. Lück S, Pikovsky A. Phys Lett A. 2011;375(28–29):2714.
  203. Komarov MA, Pikovsky A. Phys Rev E. 2011;84(1):016210.
  204. Koch C. Biophysics of computation: information processing in single neurons. Oxford: Oxford University Press; 2004.
  205. So P, Luke TB, Barreto E. Physica D. 2014;267:16.
  206. Laing CR. In: Moustafa AA, editor. Computational models of brain and behavior. New York: Wiley; 2017. Chap. 37, p. 505–18.
  207. Devalle F, Roxin A, Montbrió E. PLoS Comput Biol. 2017;13(12):e1005881.
  208. Ratas I, Pyragas K. Phys Rev E. 2017;96(4):042212.
  209. Ceni A, Olmi S, Torcini A, Angulo-Garcia D. arXiv:1908.07954 (2019).
  210. Coombes S. SIAM J Appl Dyn Syst. 2008;7(3):1101.
  211. Laing CR. SIAM J Appl Dyn Syst. 2015;14(4):1899.
  212. Pietras B, Devalle F, Roxin A, Daffertshofer A, Montbrió E. Phys Rev E. 2019;100(4):042412.
  213. Ariaratnam JT, Strogatz SH. Phys Rev Lett. 2001;86(19):4278.
  214. Pazó D, Montbrió E. Phys Rev X. 2014;4(1):011009.
  215. Schultheiss NW, Prinz AA, Butera RJ. Phase response curves in neuroscience: theory, experiment, and analysis. Berlin: Springer; 2011.
  216. Gallego R, Montbrió E, Pazó D. Phys Rev E. 2017;96(4):042208.
  217. Dumont G, Ermentrout GB, Gutkin B. Phys Rev E. 2017;96(4):042311.
  218. Laing CR. Chaos. 2016;26(9):094802.
  219. Esnaola-Acebes JM, Roxin A, Avitabile D, Montbrió E. Phys Rev E. 2017;96(5):052407.
  220. Byrne Á, Avitabile D, Coombes S. Phys Rev E. 2019;99(1):012313.
  221. Laing CR, Omel’chenko O. Chaos. 2020;30(4):043117.
  222. Chandra S, Hathcock D, Crain K, Antonsen TM, Girvan M, Ott E. Chaos. 2017;27(3):033102.
  223. Blasche C, Means S, Laing CR. J Comput Dyn. 2020; to appear. arXiv:2004.00206.
  224. Laing CR, Bläsche C. Biol Cybern. 2020.
  225. Schmidt H, Avitabile D, Montbrió E, Roxin A. PLoS Comput Biol. 2018;14(9):1.
  226. Di Volo M, Torcini A. Phys Rev Lett. 2018;121(12):128301.
  227. Dumont G, Gutkin B. PLoS Comput Biol. 2019;15(5):e1007019.
  228. Bi H, Segneri M, di Volo M, Torcini A. Phys Rev Res. 2020;2(1):013042.
  229. Keeley S, Byrne Á, Fenton A, Rinzel J. J Neurophysiol. 2019;121(6):2181.
  230. Devalle F, Montbrió E, Pazó D. Phys Rev E. 2018;98(4):042214.
  231. Ratas I, Pyragas K. Phys Rev E. 2018;98(5):052224.
  232. Akao A, Shirasaka S, Jimbo Y, Ermentrout B, Kotani K. arXiv:1903.12155 (2019).
  233. Jonmohamadi Y, Poudel G, Innes C, Jones R. NeuroImage. 2014;101:720.
  234. Hassan M, Dufor O, Merlet I, Berrou C, Wendling F. PLoS ONE. 2014;9(8):e105041.
  235. Stankovski T, Pereira T, McClintock PVE, Stefanovska A. Rev Mod Phys. 2017;89(4):045001.
  236. Timme M, Casadiego J. J Phys A. 2014;47(34):343001.
  237. Friston KJ. Brain Connect. 2011;1(1):13.
  238. Wang HE, Friston KJ, Bénar CG, Woodman MM, Chauvel P, Jirsa V, Bernard C. NeuroImage. 2018;166:167.
  239. Garcés P, Pereda E, Hernández-Tamames JA, Del-Pozo F, Maestú F, Ángel Pineda-Pardo J. Hum Brain Mapp. 2016;37(1):20.
  240. Valdes-Sosa PA, Roebroeck A, Daunizeau J, Friston K. NeuroImage. 2011;58(2):339.
  241. Stam CJ. Nat Rev Neurosci. 2014;15(10):683.
  242. Bastos AM, Vezoli J, Fries P. Curr Opin Neurobiol. 2015;31:173.
  243. Bassett DS, Zurn P, Gold JI. Nat Rev Neurosci. 2018;19(9):566.
  244. Senden M, Deco G, De Reus MA, Goebel R, Van Den Heuvel MP. NeuroImage. 2014;96:174.
  245. Demirtaş M, Falcon C, Tucholka A, Gispert JD, Molinuevo JL, Deco G. NeuroImage Clin. 2017;16:343.
  246. Misic B, Betzel RF, Reus MAD, Heuvel MPVD, Berman MG, Mcintosh AR, Sporns O. Cereb Cortex. 2016;26:3285.
  247. Shen K, Hutchison RM, Bezgin G, Everling S, McIntosh AR. J Neurosci. 2015;35(14):5579.
  248. Dauwels J, Vialatte F, Musha T, Cichocki A. NeuroImage. 2010;49(1):668.
  249. Lehnertz K, Ansmann G, Bialonski S, Dickten H, Geier C, Porz S. Physica D. 2014;267:7.
  250. Schmidt H, Woldman W, Goodfellow M, Chowdhury FA, Koutroumanidis M, Jewell S, Richardson MP, Terry JR. Epilepsia. 2016;57(10):e200.
  251. Tait L, Stothart G, Coulthard E, Brown JT, Kazanina N, Goodfellow M. Clin Neurophysiol. 2019;130(9):1581.
  252. Weerasinghe G, Duchet B, Cagnan H, Brown P, Bick C, Bogacz R. PLoS Comput Biol. 2019;15(8):e1006575.
  253. Cagnan H, Pedrosa D, Little S, Pogosyan A, Cheeran B, Aziz T, Green A, Fitzgerald J, Foltynie T, Limousin P, Zrinzo L, Hariz M, Friston KJ, Denison T, Brown P. Brain. 2017;140(1):132.
  254. Byrne Á, O’Dea R, Forrester M, Ross J, Coombes S. J Neurophysiol. 2020;123:726.
  255. Thiem TN, Kooshkbaghi M, Bertalan T, Laing CR, Kevrekidis IG. Front Comput Neurosci. 2020;14:36.
  256. van Vreeswijk C, Sompolinsky H. Science. 1996;274(5293):1724.
  257. van Vreeswijk C, Sompolinsky H. Neural Comput. 1998;10(6):1321.
  258. Barreto E, Hunt B, Ott E, So P. Phys Rev E. 2008;77(3):036107.
  259. Kivelä M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA. J Complex Netw. 2014;2(3):203.
  260. Gong CC, Pikovsky A. Phys Rev E. 2019;100(6):062210.
  261. Swift JW, Strogatz SH, Wiesenfeld K. Physica D. 1992;55(3–4):239.
  262. Landau L, Lifshitz E. Course of theoretical physics. Volume 6: fluid mechanics. London: Pergamon Press; 1959.
  263. Skardal PS. Phys Rev E. 2018;98(2):022207.
  264. Pietras B, Deschle N, Daffertshofer A. Phys Rev E. 2018;98(6):062219.
  265. Strogatz SH. In: Frontiers in mathematical biology. Berlin: Springer; 1994. p. 122–38. (Lecture notes in biomathematics; vol. 100).



Acknowledgements

We are grateful to R. Bogacz, B. Duchet, B. Jüttner, K. Lehnertz, and W. Woldman for helpful feedback on the manuscript, and to R. Mirollo and J. Engelbrecht for helpful discussions. We thank the anonymous referees for careful reading and suggestions that helped to improve this review. This article is part of the research activity (EAM, CL) of the Advanced Study Group 2017 “From Microscopic to Collective Dynamics in Neural Circuits” held at the Max Planck Institute for the Physics of Complex Systems in Dresden (Germany).

Availability of data and materials

No new data were generated in this study.


Funding

MG gratefully acknowledges the financial support of the EPSRC via grants EP/P021417/1 and EP/N014391/1. MG also acknowledges the generous support of a Wellcome Trust Institutional Strategic Support Award via grant WT105618MA.

Author information

Contributions

All authors contributed to writing the manuscript. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Christian Bick or Erik A. Martens.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Bick, C., Goodfellow, M., Laing, C.R. et al. Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review. J. Math. Neurosc. 10, 9 (2020).



Keywords

  • Network dynamics
  • Coupled oscillators
  • Neural networks
  • Mean-field reductions
  • Ott–Antonsen reduction
  • Watanabe–Strogatz reduction
  • Kuramoto model
  • Winfree model
  • Theta neuron model
  • Quadratic integrate-and-fire neurons
  • Neural masses
  • Structured networks