 Review
 Open Access
 Published:
Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review
The Journal of Mathematical Neuroscience volume 10, Article number: 9 (2020)
Abstract
Many biological and neural systems can be seen as networks of interacting periodic processes. Importantly, their functionality, i.e., whether these networks can perform their function or not, depends on the emerging collective dynamics of the network. Synchrony of oscillations is one of the most prominent examples of such collective behavior and has been associated both with function and dysfunction. Understanding how network structure and interactions, as well as the microscopic properties of individual units, shape the emerging collective dynamics is critical to find factors that lead to malfunction. However, many biological systems such as the brain consist of a large number of dynamical units. Hence, their analysis has either relied on simplified heuristic models on a coarse scale, or the analysis comes at a huge computational cost. Here we review recently introduced approaches, known as the Ott–Antonsen and Watanabe–Strogatz reductions, allowing one to simplify the analysis by bridging small and large scales. Thus, reduced model equations are obtained that exactly describe the collective dynamics for each subpopulation in the oscillator network via only a few collective variables. The resulting equations are next-generation models: Rather than being heuristic, they exactly link microscopic and macroscopic descriptions and therefore accurately capture microscopic properties of the underlying system. At the same time, they are sufficiently simple to analyze without great computational effort. In the last decade, these reduction methods have become instrumental in understanding how network structure and interactions shape the collective dynamics and the emergence of synchrony. We review this progress based on concrete examples and outline possible limitations. Finally, we discuss how linking the reduced models with experimental data can guide the way towards the development of new treatment approaches, for example, for neurological disease.
Introduction
Many systems in neuroscience and biology are governed on different levels by interacting periodic processes [1]. Networks of coupled oscillators provide models for such systems. Each node in the network is an oscillator (a dynamical process) and the network structure encodes which oscillators interact with each other [2]. In neuroscience, individual oscillators could be single neurons in microcircuits or neural masses on a more macroscopic level [3]. Other prominent examples in biology include cells in heart tissue [4], flashing fireflies [5], the dynamics of cilia and flagella [6], gait patterns of animals [7] or humans [8], cells in the suprachiasmatic nucleus in the brain generating the master clock for the circadian rhythm [9–11], hormone rhythms [12], suspensions of yeast cells undergoing metabolic oscillations [13, 14], and life cycles of phytoplankton in chemostats [15].
The functionality—whether function or dysfunction—of these networks depends on the collective dynamics of the interacting oscillatory nodes. Hence, one major challenge is to understand how the underlying network shapes these collective dynamics. In particular, one would like to understand how the interplay of network properties (for example, coupling topology and the strength of interactions) and characteristics of the individual nodes shapes the emergent dynamics. The question of relating network structure and dynamics is particularly pertinent in the study of large-scale brain dynamics: For example, one can investigate how emergent functional connectivity (a dynamical property) arises from specific structural connectomes [16, 17], and how each of these relates to cognition or disease. Progress in this direction aims not only to identify how healthy or pathological dynamics is linked to the network structure, but also to develop new treatment approaches [18–21].
One of the most prominent collective behaviors of an oscillator network occurs when nodes synchronize and oscillate in unison [22–24]; indeed, most of the examples given above display synchrony in some form, which appears to be essential to the proper functioning of biological life processes. Here we think of synchrony in a general way: It can come in many varieties, including phase synchrony, where the states of different oscillators align exactly, or frequency synchrony, where the oscillators’ frequencies coincide. Synchrony may be global across the entire network or localized in a particular part—the rest of the network being nonsynchronized—thus giving rise to synchrony patterns. How exactly the dynamics of synchrony patterns in an oscillator network relate to its functional properties is still not fully understood. In the brain, there is a wide range of rhythms, but the presence of dominant rhythms in different frequency bands indicates that some level of synchrony is common at multiple scales [25, 26]. Indeed, synchrony has been associated with solving functional tasks including, but not limited to, memory [27], computational functions [28], cognition [29], attention [30, 31], routing of information [31–33], control of gait and motion [34], or breathing [35, 36]. As a specific example, coordination of dynamics at the theta frequency (4–12 Hz) between hippocampus and frontal cortex is enhanced in spatial memory tasks [37]. At the same time, abnormal synchrony patterns are associated with malfunction in disorders such as epilepsy and Parkinson’s disease [38–41]; evolving patterns of synchrony can, for example, be observed in electroencephalographic (EEG) recordings throughout epileptogenesis in mice [42].
With a detailed model of each node and a large number of nodes in the network, the task of relating network structure and dynamics is daunting. Hence, simplifying analytical reduction methods are needed that—rather than being purely computational—yield a mechanistic understanding of the inherent processes leading to a certain macroscopic dynamical behavior. If many biologically relevant state variables are considered in a microscopic model, each node is represented by a high-dimensional dynamical system by itself. Hence, a common approach is to simplify the description of each oscillatory node to its simplest form, a phase oscillator; in the reduced system the state of each oscillator is given by a single periodic phase variable that captures the state of the periodic process. In this case, biologically relevant details are captured by the evolution of the phase variable and its interaction with the phases of the other nodes. There are two important ways to arrive at a phase description of an oscillator network, both of which are common tools used, for example, in computational neuroscience; see [43, 44] for recent reviews. First, under the assumption of weak coupling one can go through a process of phase reduction to obtain a phase description [45–50]. Second, one can—based on the biophysical properties of the system—impose a phase model such as the Kuramoto model [51] or a network of Theta neurons [52].
The main topic of this paper is an introduction to and a review of recent advances in exact mean-field reductions for networks of coupled oscillators. The main achievement is that for certain classes of oscillator networks, it is possible to replace a large number of nodes by a collective or mean-field variable that describes the network evolution exactly—thereby reducing the complexity of the problem immensely. In the neuroscientific context, subpopulations may represent different populations of neurons that exhibit temporal patterns of synchronization or activity [16, 17, 53]. Of course, mean-field approaches motivated by statistical physics have a long history in computational neuroscience as tools to approximate the dynamics of large ensembles of units; see, for example, [54, 55] and references therein. They have been useful to elucidate, for example, dynamical mechanisms behind the emergence of rhythms in the gamma frequency band, such as the emergence of the pyramidal-interneuronal gamma (PING) rhythm [56], or the interplay between different brain areas (for example, through phase-phase, phase-amplitude, and amplitude comodulation) that can lead to frequency entrainment [57]. In terms of classical mean-field approaches, the pioneering works by Wilson and Cowan [58] and Amari [59] stand out: They derived heuristic equations for average neural population dynamics that are still widely used in neural modeling. Specifically, such models disregard fluctuations of individual units^{Footnote 1} and arrive at equations that approximate the evolution of means. By contrast, the exact mean-field reductions we discuss here, the Ott–Antonsen reduction and the Watanabe–Strogatz reduction, can be employed not only for infinite networks but also for networks of finitely many oscillators.
While these reductions only apply to specific classes of systems—and from a mathematical perspective reflect the special structure of these systems—they include models that have been widely used in neuroscience and beyond, such as the Kuramoto model. Compared to heuristic mean-field approximations, the resulting reduced equations are derived exactly from the microscopic equations of individual oscillators and thus capture properties of individual oscillators; because of this property these reduced equations have been referred to as next-generation models [60]. Employing these models in modeling tasks provides a powerful opportunity to bridge the dynamics on microscopic and macroscopic scales.
To illustrate the mean-field reductions and their applicability, we focus here on networks that are organized into distinct (sub)populations because of their practical importance.^{Footnote 2} The mean-field reductions allow one to replace each subnetwork by a (low-dimensional) set of collective variables to obtain a set of dynamical equations for these variables. This set of mean-field equations describes the system exactly. For the classical Kuramoto model, which is widely used to understand synchronization phenomena, we will see below that the collective state is captured by a two-dimensional mean-field variable that encodes the synchrony of the population. Reducing to a set of mean-field equations provides a simplified—but often still sufficiently complex—description of the network dynamics that can be analyzed by using dynamical systems techniques [61, 62]. We will outline the classes of models for which the mean-field reductions apply and illustrate how these reduction techniques have been instrumental in the last decade in illuminating how the network properties relate to dynamical phenomena. We give a number of concrete examples, from Kuramoto’s problem about the emergence of synchrony in oscillator populations to the emergence of PING rhythms based on microscopic properties of neuronal networks.
There are many important questions and aspects that we cannot touch upon in this review, and we refer to already existing reviews and literature instead. First, we only consider oscillator networks where each (microscopic) node has a first-order description by a single phase variable. We will not cover other microscopic models such as second-order phase oscillators or oscillators with a phase and an amplitude,^{Footnote 3} which can give rise to richer dynamics. Second, we do not comment on the validity of a phase reduction; for more information see, for example, [47, 50]. Third, the reductions we discuss have been essential to understand the emergence of synchrony patterns where coherence and incoherence coexist, also known as “chimeras.” Here, we only mention results relevant to the dynamics of coupled oscillator populations and refer to [63–65] for recent reviews on chimeras. Fourth, the results mentioned here relate to results from network science [66, 67]. In particular, properties of the graph underlying the dynamical network relate to synchronization dynamics [68–70]. Moreover, we also typically assume that the network structure is static and does not evolve over time. However, time-dependent network structures are clearly of interest—in particular in the context of plastic neural connectivity and neurological disease. An approach to these issues from the perspective of network science is temporal networks [71], while asynchronous networks take a more dynamical point of view; see [72] and references therein. Fifth, we restrict ourselves to deterministic dynamics where noise is absent. From a mathematical point of view, noise can simplify the analysis, and recent results show that similar reduction methods apply [73–75]. Finally, it is worth noting that other reduction approaches for oscillator networks have recently been explored [76–78].
This review is written with a diverse readership in mind, ranging from mathematicians to computational biologists who want to use the reduced equations for modeling. In fact, this review is intended to have aspects of a tutorial: to provide an introduction to the Ott–Antonsen and Watanabe–Strogatz reductions as exact mean-field reductions and to outline what types of questions they have been instrumental in answering. We include three explicit examples of how these mean-field reductions can be helpful in giving insights into the collective dynamics of (neuronal) oscillator networks.
In the following, we provide an outline of how to approach this paper. The next section sets the stage by introducing the notion of a sinusoidally coupled network, and we summarize the main oscillator models we relate to throughout the paper; these include the Kuramoto model and networks of Theta neurons (which are equivalent to Quadratic Integrate and Fire (QIF) neurons). In the third section, we give a general theory for the mean-field reductions and discuss their limitations: The methods include the Ott–Antonsen reduction for the mean-field limit of nonidentical oscillators and the Watanabe–Strogatz reduction for finite or infinite networks of identical oscillators. This section includes a certain level of mathematical detail to explain the ideas behind the derivation of the reduced equations (mathematically dense sections are marked with the symbol “*” and may be skipped at first reading). If you are mainly interested in applying the reduced equations, you may want to skip ahead to Sects. 3.1.2 and 3.2.2, which summarize the reduced equations for the models we study throughout the paper. In the fourth section, we apply the reductions and emphasize how they are useful to understand how synchrony and patterns of synchrony emerge in such oscillator networks. This includes a number of concrete examples. Since most of these considerations are theoretical and computational, we discuss in the last section how the mean-field reductions can be used to solve neuroscientific problems and be linked with experimental data. We conclude with some remarks and highlight a number of open problems.
List of symbols
The following symbols will be used throughout this paper.
 \(\mathbb {N}\):
The positive integers
 \(\mathbb {T}\):
The circle of all phases \(\mathbb {R}/2\pi \mathbb {Z}\) (or \([0, 2\pi ]\) with \(0\equiv2\pi\))
 \(\mathbb {C}\):
The complex numbers
 i:
Imaginary unit \(\sqrt{-1}\)
 \(\operatorname {Re}(w )\), \(\operatorname {Im}(w )\):
Real part and imaginary part of a complex number \(w\in \mathbb {C}\)
 w̄:
Complex conjugate of \(w\in \mathbb {C}\)
 M:
Number of oscillator populations in the network
 σ, τ:
Population indices in \(\lbrace 1,\dotsc, M\rbrace\)
 N:
Number of oscillators in each population
 \(k,j,l,\dots\):
Oscillator indices in \(\lbrace1,\dotsc ,N\rbrace\)
 \(\theta_{\sigma,k}\):
Phase of oscillator k in population σ
 κ, \(\kappa^{\text{GJ}}\), \(\kappa^{\mathrm {g}}\):
Coupling strength between neural oscillators
 \(Z_{\sigma}\):
Kuramoto order parameter of population σ
 \(R_{\sigma}\):
The level of synchrony \(\vert Z_{\sigma} \vert \) of population σ
 \(z_{\sigma}\), \(\varPsi_{\sigma}\):
Bunch variables of population σ
 ẋ:
The time derivative \(\frac{\textrm {d}x}{\textrm {d}t}\) of x
Sinusoidally coupled phase oscillator networks
The state of each node in a phase oscillator network is given by a single phase variable. Such networks may be obtained through a phase reduction or may be abstract models in their own right as in the case of the Theta neuron below. Consider a population σ of N oscillators where the state of oscillator k is given by a phase \(\theta_{\sigma,k}\in \mathbb {T}:=\mathbb {R}/2\pi \mathbb {Z}\); if there is only a single population, we drop the index σ. Without input, the phase of each oscillator \((\sigma, k)\) advances at its intrinsic frequency \(\omega_{\sigma, k}\in \mathbb {R}\). Input to oscillator \((\sigma, k)\) is determined by a field \(H_{\sigma ,k}(t)\in \mathbb {C}\) and modulated by a sinusoidal function; this field could be an external drive or network interactions between oscillators within population σ or from other populations τ. Specifically, we consider oscillator networks whose phases evolve according to
\[ \dot{\theta}_{\sigma,k} = \omega_{\sigma,k} + \operatorname {Im} \bigl(H_{\sigma,k}(t) e^{-i\theta_{\sigma,k}} \bigr). \tag{1} \]
Since the effect of the field on oscillator \((\sigma, k)\) is mediated by a function with exactly one harmonic, \(e^{i\theta_{\sigma,k}}\), we call the oscillator populations sinusoidally coupled.^{Footnote 4}
While we allow the intrinsic frequency and the driving field to depend on the oscillator to a certain extent (i.e., oscillators are nonidentical), we will henceforth assume that all oscillators within any given population σ are otherwise (statistically) indistinguishable, i.e., the properties of each oscillator in a given population are determined by the same distribution. Specifically, suppose that the properties of each oscillator are determined by a certain parameter \(\eta_{\sigma,k}\). This is, for example, the case for the Theta neurons described further below, each of which has an individual level of excitability as a parameter. Let us formulate this more precisely. Suppose that we let both the intrinsic frequencies and the field be functions of this parameter, i.e., \(\omega_{\sigma, k} = \omega_{\sigma}(\eta_{\sigma,k})\), \(H_{\sigma,k}(t) = H_{\sigma}(t; \eta_{\sigma,k})\). The oscillators of a given population are then indistinguishable if, for a given population σ, all \(\eta _{\sigma,k}\) are random variables sampled from a single probability distribution with density \(h_{\sigma}(\eta)\). In the special case that \(\eta_{\sigma,k}=\eta_{\sigma,j}\) for \(j\neq k\) (in this case \(h_{\sigma}\) is a delta distribution) we say that the oscillators are identical.
Phase oscillator networks of the form (1) include a range of wellknown (and wellstudied) models. These range from particular cases of Winfree’s model [79] to neuron models. In the following we discuss some important examples that we will revisit in more detail throughout this paper.
The Kuramoto model
Kuramoto originally studied synchronization in a network of N globally coupled nonidentical (but indistinguishable) phase oscillators [80]; see [81] for an excellent survey of the problem and its historical background. Kuramoto originally investigated the onset of synchronization in a network composed of only a single population of oscillators indexed by \(k\in \lbrace1, \dotsc, N\rbrace\) with phases \(\theta_{k}\) (here we drop the population index σ). The oscillator phases evolve according to
\[ \dot{\theta}_{k} = \omega_{k} + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_{j} - \theta_{k}) \tag{2} \]
with distinct intrinsic frequencies \(\omega_{k}\) that are sampled from some unimodal frequency distribution. Here the parameter K is the coupling strength between oscillators and the coupling is mediated by the sine of the phase difference between oscillators. If coupling is absent (\(K=0\)), each oscillator advances with its intrinsic frequency \(\omega_{k}\).
The macroscopic state of the population is characterized by the complex-valued Kuramoto order parameter^{Footnote 5}
\[ Z = \frac{1}{N} \sum_{k=1}^{N} e^{i\theta_{k}}, \tag{3} \]
representing the mean of all phases on the unit circle. Its magnitude \(R= \vert Z\vert \) describes the level of synchronization of the oscillator population, see Fig. 1: On the one hand, \(R=1\) if and only if all oscillators are phase synchronized, that is, \(\theta_{k}=\theta_{j}\) for all k and j; on the other hand, we have \(R=0\) if, for example, the oscillators are evenly distributed around the circle. The argument ϕ of the Kuramoto order parameter Z (which is well-defined for \(Z\neq0\)) describes the “average phase” of all oscillators, that is, it describes the average position of the oscillator crowd on the circle of phases.
Kuramoto observed the following macroscopic behavior: For K small, the system converges to an incoherent stationary state with \(R\approx 0\). As K is increased past a critical coupling strength \(K_{c}\), the system settles down to a state with partial synchrony, \(R>0\). As the coupling strength is further increased, \(K\to\infty\), oscillators become more and more synchronized, \(R\to1\).
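Kuramoto's observation can be reproduced with a few lines of code. The sketch below is not part of the review; the Lorentzian frequency distribution of width Δ = 0.5, for which the classical critical coupling is K_c = 2Δ, and all numerical parameters are illustrative assumptions. It integrates the model with Euler steps and returns the final level of synchrony R = |Z|:

```python
import numpy as np

def simulate_kuramoto(K, N=500, dt=0.01, steps=4000, seed=0):
    """Euler-integrate the Kuramoto model and return the final synchrony R = |Z|."""
    rng = np.random.default_rng(seed)
    omega = 0.5 * rng.standard_cauchy(N)      # Lorentzian frequencies, width 0.5
    theta = rng.uniform(0.0, 2 * np.pi, N)    # random initial phases
    for _ in range(steps):
        Z = np.mean(np.exp(1j * theta))       # Kuramoto order parameter
        theta += dt * (omega + K * np.imag(Z * np.exp(-1j * theta)))
    return abs(np.mean(np.exp(1j * theta)))

R_low = simulate_kuramoto(K=0.2)   # below K_c: incoherence, R stays close to 0
R_high = simulate_kuramoto(K=3.0)  # above K_c: partial synchrony, R well above 0
```

For this Lorentzian distribution the partially synchronized branch satisfies \(R=\sqrt{1-2\Delta/K}\) in the mean-field limit, so K = 3 should give R near 0.8, while K = 0.2 leaves only finite-size fluctuations of order \(1/\sqrt{N}\).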
The Kuramoto model (2) is an example of a sinusoidally coupled phase oscillator network. Using Euler’s identity \(e^{i\phi }=\cos(\phi)+i\sin(\phi)\), we have
\[ \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_{j} - \theta_{k}) = \operatorname {Im} \bigl(K Z(t) e^{-i\theta_{k}} \bigr), \]
where the Kuramoto order parameter \(Z(t) = Z(\theta_{1}(t), \dotsc , \theta_{N}(t))\), as defined in (3), depends on time through the phases. Hence, the Kuramoto model (2) is equivalent to (1) with \(H(t)=KZ(t)\) and the interactions between oscillators are solely determined by the Kuramoto order parameter \(Z(t)\). Such a form of network interaction is also called meanfield coupling since the drive \(H(t)\) to a single oscillator is proportional to a mean field, that is, the average formed from the states of all oscillators in the network.
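The equivalence between the pairwise and mean-field forms of the coupling is easy to verify numerically. In this short check (the phase configuration and coupling strength are arbitrary choices) both expressions agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 100, 1.7
theta = rng.uniform(0.0, 2 * np.pi, N)

# Pairwise coupling (K/N) * sum_j sin(theta_j - theta_k) for every oscillator k ...
pairwise = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

# ... equals the mean-field drive Im(K * Z * exp(-i * theta_k)),
# where Z is the Kuramoto order parameter.
Z = np.mean(np.exp(1j * theta))
meanfield = np.imag(K * Z * np.exp(-1j * theta))
```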
Problem 1
How can meanfield reductions elucidate Kuramoto’s original problem of the onset of synchronization in an infinitely large population of oscillators? We will revisit this problem in Example 1 below.
Populations of Kuramoto–Sakaguchi oscillators
Sakaguchi generalized Kuramoto’s model by introducing an additional phase-lag (or phase-frustration) parameter which approximates a time delay in the interactions between oscillators [63, 82]. While Sakaguchi originally considered a single population of oscillators, here we generalize to multiple interacting populations. Specifically, we consider the dynamics of M populations of N Kuramoto–Sakaguchi oscillators, where the phase of oscillator k in population σ evolves according to
\[ \dot{\theta}_{\sigma,k} = \omega_{\sigma,k} + \sum_{\tau=1}^{M} \frac{K_{\sigma\tau}}{N} \sum_{j=1}^{N} \sin(\theta_{\tau,j} - \theta_{\sigma,k} - \alpha_{\sigma\tau}), \tag{4} \]
and where \(K_{\sigma\tau}\geq0\) is the coupling strength and \(\alpha_{\sigma\tau}\) is the phase lag between populations σ and τ.^{Footnote 6} The function \(g_{\sigma\tau}(\phi)=K_{\sigma\tau}\sin(\phi - \alpha_{\sigma\tau})\) mediates the interactions between oscillators, and we refer to it as the coupling function; later on we will also briefly touch upon what happens if the sine function is replaced by a general periodic coupling function. As in the Kuramoto model, an important point is that the influence between oscillators \((\tau,j)\) and \((\sigma,k)\) depends only on their phase difference (rather than explicitly on their phases).^{Footnote 7} Thus, this form of interaction only depends on the relative phase between oscillator pairs rather than the absolute phases. An important consequence is that the dynamics of Eqs. (4) do not change if we consider all phases in a different reference frame. For example, going into a reference frame rotating at constant frequency \(\omega _{\mathrm {f}}\in \mathbb {R}\) corresponds to the transformation \(\theta_{\sigma,k}\mapsto\theta_{\sigma,k}-\omega _{\mathrm {f}}t\), which only shifts all intrinsic frequencies by \(\omega _{\mathrm {f}}\) rather than changing the dynamics qualitatively.^{Footnote 8}
The network (4) of M interacting populations of Kuramoto–Sakaguchi oscillators is a sinusoidally coupled oscillator network. The amount of synchrony in population σ is determined by the Kuramoto order parameter (3) for population σ,
\[ Z_{\sigma} = \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_{\sigma,j}}. \tag{5} \]
Combining coupling strength and phase lag, we define the complex interaction parameter \(c_{\sigma\tau} := K_{\sigma\tau} e^{-i\alpha _{\sigma\tau}}\) between populations σ and τ. By the same calculation as above, the network (4) is equivalent to (1) with constant intrinsic frequencies \(\omega_{\sigma,k}\) and driving field
\[ H_{\sigma}(t) = \sum_{\tau=1}^{M} c_{\sigma\tau} Z_{\tau}(t), \tag{6} \]
being a linear combination of the mean fields of the other populations.
Networks of Kuramoto–Sakaguchi oscillators have been used as models for synchronization phenomena. In neuroscience, individual oscillators can represent neurons [83] or large numbers of neurons in neural masses [51, 84, 85]. In the framework of the model (4), the populations can be thought of as M neural masses. In contrast to models where neural masses only have a phase, here, the macroscopic state of each population (neural mass) is determined by an amplitude (the level of synchrony \(R_{\sigma}:= \vert Z_{\sigma} \vert \)) and an angle (the average phase \(\phi_{\sigma}:= \arg{Z_{\sigma}}\)).
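As an illustration of the multi-population setup, the following sketch integrates M = 2 populations of Kuramoto–Sakaguchi oscillators, feeding each population a drive built from the per-population order parameters. All parameter values here (the coupling matrix, zero phase lags, the narrow frequency spread) are illustrative assumptions, not taken from the review:

```python
import numpy as np

def two_population_sakaguchi(Ks, alphas, N=200, dt=0.01, steps=3000, seed=2):
    """Integrate two coupled populations and return the synchrony levels (R_1, R_2).

    Ks[s][t] and alphas[s][t] are the coupling strength and phase lag from
    population t to population s."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, (2, N))
    omega = 0.01 * rng.standard_normal((2, N))   # nearly identical oscillators
    for _ in range(steps):
        Z = np.mean(np.exp(1j * theta), axis=1)  # order parameter of each population
        # Drive on population s: sum over t of K_st * exp(-i alpha_st) * Z_t
        H = np.array([sum(Ks[s][t] * np.exp(-1j * alphas[s][t]) * Z[t]
                          for t in range(2)) for s in range(2)])
        theta += dt * (omega + np.imag(H[:, None] * np.exp(-1j * theta)))
    return np.abs(np.mean(np.exp(1j * theta), axis=1))

# Strong coupling within each population, weak coupling across: both synchronize.
R = two_population_sakaguchi(Ks=[[1.0, 0.1], [0.1, 1.0]],
                             alphas=[[0.0, 0.0], [0.0, 0.0]])
```

With nonzero phase lags or asymmetric cross-coupling, the same sketch can produce states in which one population synchronizes while the other remains incoherent.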
Theta and Quadratic Integrate and Fire neurons
Theta neurons
The Theta neuron is the normal form of the saddle-node-on-invariant-circle (SNIC) or saddle-node-infinite-period (SNIPER) bifurcation [86] as shown in Fig. 2: At the excitation threshold, a saddle and a node coalesce on an invariant circle (i.e., the limit cycle of the neuron). Its state is described by the phase \(\theta\in \mathbb {T}\) on the invariant circle, and we use the convention^{Footnote 9} that the neuron fires (it emits a spike) when the phase crosses \(\theta=\pi\) (Fig. 2). The Theta neuron is a valid description of the dynamics of any neuron model undergoing this bifurcation, in some parameter neighborhood of the bifurcation. The Theta neuron is also a canonical Type 1 neuron [87].
Consider a single population of Theta neurons (hence we drop the population index σ) whose phases evolve according to
\[ \dot{\theta}_{k} = 1 - \cos\theta_{k} + (1 + \cos\theta_{k}) (\eta_{k} + \kappa I), \tag{7} \]
where \(\eta_{k}\) is the excitability of neuron k sampled from a probability distribution, κ is the coupling strength, and I is an input current—this could result from external input (driving) or network interactions. A population of Theta neurons (7) is a sinusoidally coupled system of the form (1) with
\[ \omega_{k} = 1 + \eta_{k} + \kappa I, \qquad H(t) = i (\eta_{k} + \kappa I - 1). \tag{8} \]
The dependence of \(H, \omega\) on the excitability parameters \(\eta_{k}\) can be made explicit by writing \(\omega_{k} = \omega(\eta_{k})\), \(H(t) = H(t; \eta_{k})\). Thus, results for models of the form (1) will also apply to networks of Theta neurons.
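As a consistency check, one can confirm numerically that the Theta neuron vector field \(1-\cos\theta+(1+\cos\theta)(\eta+\kappa I)\) is of the sinusoidally coupled form \(\omega + \operatorname{Im}(H e^{-i\theta})\) with \(\omega = 1+\eta+\kappa I\) and \(H = i(\eta+\kappa I-1)\), as a short computation with Euler's identity shows; the sample values of η, κ, and I below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0.0, 2 * np.pi, 1000)   # sample phases on the circle
eta, kappa, I = -0.3, 0.8, 0.5
drive = eta + kappa * I

# Theta neuron vector field
theta_dot = 1 - np.cos(theta) + (1 + np.cos(theta)) * drive

# Sinusoidally coupled form: omega + Im(H * exp(-i * theta))
omega = 1 + drive
H = 1j * (drive - 1)
sinusoidal = omega + np.imag(H * np.exp(-1j * theta))
```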
The Theta neuron was introduced in 1986 [87] and has since then been widely used in neuroscience. We refer for example to [88, 89] for a general introduction and only list a few concrete applications here. For example, Monteforte and Wolf [90] used these neurons as canonical type I neuronal oscillators in their study of chaotic dynamics in large, sparse balanced networks. References [91, 92] considered spatially extended networks of Theta neurons and the authors were specifically interested in traveling waves of activity in these networks. More recently, other authors have used some of the techniques for dimensional reduction reviewed in the present paper to study infinite networks of Theta neurons [93, 94]. We will discuss these reduction methods in detail further below.
Problem 2
What different dynamics are possible in a single population of globally coupled Theta neurons with pulsatile coupling? What is the onset for firing of neurons? We will revisit this problem in Example 2 below.
Quadratic Integrate and Fire neurons
The Theta neuron model is closely related to the Quadratic Integrate and Fire (QIF) neuron model [95] whose state is given by a membrane voltage \(V\in(-\infty, +\infty)\). More precisely, using the transformation \(V_{k}=\tan{(\theta_{k}/2)}\) the population of Theta neurons (7) becomes a population of QIF neurons, where the membrane voltage \(V_{k}\) of neuron k evolves according to
\[ \dot{V}_{k} = V_{k}^{2} + \eta_{k} + \kappa I. \tag{9} \]
Here we use the rule that the neuron fires (it emits a spike) if its voltage reaches \(V_{k}(t^{-})=+\infty\); the neuron is then reset to \(V_{k}(t^{+})=-\infty\).
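The change of variables \(V_k = \tan(\theta_k/2)\) can be verified in the same spirit: by the chain rule, \(\dot V = \tfrac{1}{2}(1+V^2)\dot\theta\), and substituting the Theta neuron dynamics recovers the QIF vector field \(V^2 + \eta + \kappa I\). The parameter values below are arbitrary:

```python
import numpy as np

eta, kappa, I = 0.2, 1.0, 0.3
drive = eta + kappa * I

theta = np.linspace(-3.0, 3.0, 500)   # phases away from theta = pi, where V diverges
V = np.tan(theta / 2)                 # transformation to membrane voltage

theta_dot = 1 - np.cos(theta) + (1 + np.cos(theta)) * drive  # Theta neuron dynamics
V_dot_chain = 0.5 * (1 + V**2) * theta_dot                   # chain rule for dV/dt
V_dot_qif = V**2 + drive                                     # QIF vector field
```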
QIF neurons have been widely used in neuroscientific modeling; see [88, 89] for a general introduction and [96–99] for a few examples in the literature where QIF neurons are employed. They have the simplicity of the more common leaky integrateandfire model in the sense of having only one state variable (the voltage), but are more realistic in the sense of actually producing spikes in the voltage trace \(V(t)\).
Problem 3
How does a network of neurons respond to a transient stimulus, specifically when the network is modeled as a heterogeneous network of all-to-all coupled QIF neurons? This is a pertinent question, for example, if stimulation is used for therapeutic purposes, such as in Deep Brain Stimulation. We will revisit this problem in Example 3 below.
Exact mean-field descriptions for sinusoidally coupled phase oscillators
In this section, we review how sinusoidally coupled phase oscillator networks (1) can be simplified using mean-field reductions. Under specific assumptions (detailed further below) we derive low-dimensional systems of ordinary differential equations for macroscopic mean-field variables that describe the evolution of sinusoidally coupled phase oscillator networks (1) exactly. This is in contrast to reductions that are only approximate or only valid over short time scales. Thus, these reduction methods facilitate the analysis of the network dynamics: rather than looking at a complex, high-dimensional network dynamical system (or its infinite-dimensional mean-field limit), we can analyze simpler, low-dimensional equations. For example, for the infinite-dimensional limit of the Kuramoto model, we obtain a closed system for the evolution of Z, a two-dimensional system (since Z is complex). While the Kuramoto model is particularly simple, the methods apply for general driving fields \(H_{\sigma,k}\) that could contain delays or depend explicitly on time. We give concrete examples in Sect. 4 below, where we apply the reduction techniques.
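To preview what such a reduced description looks like: for the Kuramoto model with Lorentzian-distributed frequencies (center \(\omega_0\), width Δ), the Ott–Antonsen reduction yields the well-known two-dimensional system \(\dot Z = (i\omega_0 - \Delta)Z + \frac{K}{2}(Z - \vert Z\vert ^2 Z)\). The minimal sketch below (the parameter values are illustrative assumptions) integrates this single complex equation and compares the attained synchrony with the analytical branch \(R = \sqrt{1-2\Delta/K}\), which exists for \(K > 2\Delta\):

```python
import numpy as np

Delta, omega0, K = 0.5, 0.0, 2.0
dt, steps = 0.01, 5000

Z = 0.3 + 0.0j   # initial mean-field state
for _ in range(steps):
    # Ott-Antonsen equation for Lorentzian frequencies
    Z += dt * ((1j * omega0 - Delta) * Z + 0.5 * K * (Z - abs(Z)**2 * Z))

R_theory = np.sqrt(1 - 2 * Delta / K)  # stable partially synchronized branch
```

Note that this integrates only two real variables, in contrast to the N phase equations of the microscopic model.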
Importantly, these mean-field reductions also apply to oscillator networks which are equivalent to (1). In particular, this applies to neural oscillators: The QIF neuron and the Theta neuron are equivalent as discussed above. Consequently, rather than assuming a model for a neural population (e.g., [51]), we actually obtain an exact description of interacting neural populations in terms of their macroscopic (mean-field) variables.
Ott–Antonsen reduction for the mean-field limit of nonidentical oscillators
The Ott–Antonsen reduction applies to the mean-field limit of populations of indistinguishable sinusoidally coupled phase oscillators (1). First, we outline the basic steps to derive the equations and highlight the assumptions made along the way; this section contains mathematical details and may be omitted on first reading. We then summarize the Ott–Antonsen equations for the models described in the previous section.
*Derivation of the reduced equations
Consider the dynamics of the (mean-field) limit of (1) with infinitely many oscillators, \(N\to\infty \). Note that while the population index σ is seen as discrete in this paper, it is also possible to apply the reduction to continuous topologies of populations such as rings; cf. [100, 101]. To simplify the exposition, we consider the classical case where the intrinsic frequency is the random parameter, \(\omega_{\sigma,k}=\eta_{\sigma, k}\), and the driving field is the same for all oscillators in any population, \(H_{\sigma,k} = H_{\sigma}\); for details on systems with explicit parameter dependence (such as Theta neurons) see [102, 103]. Hence, suppose that the intrinsic frequencies \(\omega_{\sigma,k}\) are randomly drawn from a distribution with density \(h_{\sigma}(\omega)\) on \(\mathbb {R}\). In the mean-field limit, the state of each population at time t is not given by a collection of oscillator phases, but rather by a probability density \(f_{\sigma}(\omega, \vartheta ; t)\) for an oscillator with intrinsic frequency \(\omega\in \mathbb {R}\) to have phase \(\vartheta \in \mathbb {T}\); see [104] for general properties of such distributions and statistics on the circle. For a set of phases \(B\subset \mathbb {T}\) the marginal \(\int_{B}\int_{\mathbb {R}}f_{\sigma}(\omega, \vartheta ; t)\,\textrm {d}\omega \,\textrm {d}\vartheta \) determines the fraction of oscillators whose phase is in B at time t. Moreover, we have \(\int_{\mathbb {T}}f_{\sigma}(\omega, \vartheta ; t)\,\textrm {d}\vartheta = h_{\sigma}(\omega)\) for all times t by our assumption that the intrinsic frequencies do not change over time.
Conservation of oscillators implies that the dynamics of the mean-field limit of (1) are given by the transport equation^{Footnote 10}
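In the notation above, this transport equation takes the standard continuity form (a reconstruction consistent with the velocity field described next; conventions may differ from the original display):

```latex
\frac{\partial f_{\sigma}}{\partial t}
  + \frac{\partial}{\partial \vartheta}\bigl(f_{\sigma}v_{\sigma}\bigr) = 0,
\qquad
v_{\sigma}(\omega,\vartheta;t) = \omega + \operatorname{Im}\bigl(H_{\sigma}(t)\,e^{-i\vartheta}\bigr).
\tag{10}
```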
Because oscillators are conserved,^{Footnote 11} the change of the phase distribution over time is determined by the velocity \(v_{\sigma}\), given through (1), of an oscillator with phase ϑ and intrinsic frequency ω at time t. While the transport equation for the mean-field limit originally appeared in Refs. [105, 106], it can be rigorously derived from a measure-theoretic perspective as a Vlasov limit [107].
Before we discuss how to find solutions of the transport equation (10), it is worth noting that it has also been analyzed directly in the context of functional analysis for networks of Kuramoto oscillators. Stationary solutions of (10) and their stability have been studied recently in the context of all-to-all coupled networks of Kuramoto oscillators [108–112]. Taking the mean-field limit for \(N\to\infty\) depends on the homogeneity of the network. For certain classes of structured networks—networks on convergent families of random graphs, for which a limiting object (a graphon) can be defined as the number of nodes \(N\to \infty\)—it is possible to define and analyze the dynamics of the resulting continuum limit [113, 114].
Ott and Antonsen [115] showed that there exists a manifold of invariant probability densities for the transport equation (10). Specifically, if \(f_{\sigma}(\vartheta ,\omega;0)\) lies on the manifold, then so does the density \(f_{\sigma}(\vartheta ,\omega;t)\) for all \(t\geq0\). Let
\[ Z_{\sigma}(t) = \int_{\mathbb{R}}\int_{\mathbb{T}} e^{i\vartheta}\,f_{\sigma}(\omega,\vartheta;t)\,\mathrm{d}\vartheta\,\mathrm{d}\omega \tag{11} \]
denote the Kuramoto order parameter (3) in the mean-field limit. We will see below that the evolution on the invariant manifold is described by a simple ordinary differential equation for \(Z_{\sigma}\) for each population σ.
In the following we outline the key steps to derive a set of reduced equations and refer to [115–117] for further details. Let w̄ denote the complex conjugate of \(w\in \mathbb {C}\). Suppose that \(f_{\sigma}(\vartheta ,\omega;t)\) can be expanded in a Fourier series in the phase angle ϑ of the form
Here it is assumed that \(f^{+}_{\sigma}\) has an analytic continuation into the lower complex half plane \(\lbrace \operatorname {Im}(\omega )<0 \rbrace\) (and \(f^{-}_{\sigma}:=\bar{f}^{+}_{\sigma}\) into \(\lbrace \operatorname {Im}(\omega )>0 \rbrace\)); even with this assumption one can solve a large class of problems, but it poses a restriction in a number of practical cases discussed in Sect. 3.3 below. Ott and Antonsen now imposed the ansatz that the Fourier coefficients are powers of a single function \(a_{\sigma}(\omega,t)\),
If \(\vert a_{\sigma}(\omega,t) \vert <1\), this ansatz is equivalent to the Poisson kernel structure for the unit disk, \(f^{+}_{\sigma} = a_{\sigma}e^{i\vartheta }/(1-a_{\sigma}e^{i \vartheta })\). Substitution of (12) into (10) yields
Thus, the ansatz (13) reduces the integro-differential equation (10) to a single ordinary differential equation in \(a_{\sigma}\) for each population σ. (More precisely, there is an infinite set of such equations, one for each ω, all with identical structure.) Finally, with (13) we obtain
which relates \(a_{\sigma}\) and the order parameter \(Z_{\sigma}\) in (11).
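Schematically, the objects in this derivation are, in standard Ott–Antonsen form (a sketch of (13)–(15), not a verbatim restatement of the original displays; signs and factors depend on the Fourier convention):

```latex
f^{+}_{\sigma}(\omega,\vartheta;t) = \sum_{n=1}^{\infty} a_{\sigma}(\omega,t)^{n}\,e^{in\vartheta},
\tag{13}

\frac{\partial a_{\sigma}}{\partial t} + i\omega a_{\sigma}
  + \frac{1}{2}\bigl(H_{\sigma}a_{\sigma}^{2}-\bar{H}_{\sigma}\bigr) = 0,
\tag{14}

Z_{\sigma}(t) = \int_{\mathbb{R}} \bar{a}_{\sigma}(\omega,t)\,h_{\sigma}(\omega)\,\mathrm{d}\omega.
\tag{15}
```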
Assuming analyticity, this integral may be evaluated using the residue theorem of complex analysis.^{Footnote 12} These equations take a particularly simple form if the distribution of intrinsic frequencies \(h_{\sigma}(\omega)\) is Lorentzian with mean \(\hat {\omega }_{\sigma}\) and width \(\Delta_{\sigma}\), i.e.,
since \(h_{\sigma}(\omega)\) has two simple poles at \(\hat {\omega }_{\sigma}\pm i\Delta_{\sigma}\) and thus (15) gives \(Z_{\sigma}=\bar{a}_{\sigma}(\hat {\omega }_{\sigma}-i\Delta_{\sigma},t)\) under the assumption that \(a_{\sigma}(\omega,t)\to0\) as \(\operatorname {Im}(\omega )\to-\infty\). As a result, we obtain the two-dimensional differential equation—the Ott–Antonsen equations for a Lorentzian frequency distribution—for the order parameter in population σ.
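Concretely, the Lorentzian density and the resulting mean-field equation take the standard forms (a reconstruction consistent with the pole locations quoted above):

```latex
h_{\sigma}(\omega) = \frac{1}{\pi}\,\frac{\Delta_{\sigma}}{(\omega-\hat{\omega}_{\sigma})^{2}+\Delta_{\sigma}^{2}},
\tag{16}

\dot{Z}_{\sigma} = (i\hat{\omega}_{\sigma}-\Delta_{\sigma})Z_{\sigma}
  + \frac{1}{2}\bigl(H_{\sigma}-\bar{H}_{\sigma}Z_{\sigma}^{2}\bigr).
\tag{OA}
```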
We note that this reduction method also works for other frequency distributions \(h_{\sigma}\), as outlined in [117]. However, the resulting mean-field equation will not always be a single equation but could be a set of coupled equations. For example, for multimodal frequency distributions \(h_{\sigma}\) the Ott–Antonsen equations will have an equation for each mode; see [103, 118, 119] and the discussion below.
The derivation above only states that there exists an invariant manifold of densities \(f_{\sigma}\) for the transport equation (10). What happens to densities \(f_{\sigma}\) that are not on the manifold as time evolves? Under some assumptions on the distribution of intrinsic frequencies \(h_{\sigma}\), Ott and Antonsen also showed in [116] that there are densities \(f_{\sigma}\) off the manifold that are attracted to it. In other words, the dynamics of the Ott–Antonsen equations capture the long-term dynamics of a wider range of initial phase distributions \(f_{\sigma}(\vartheta ,\omega ;0)\), whether they satisfy (13) initially or not. We discuss this in more detail below.
Ott–Antonsen equations for commonly used oscillator models
We now summarize the Ott–Antonsen equations (OA) for the commonly used oscillator models described in Sect. 2. Here we focus on Lorentzian distributions of the intrinsic frequencies or excitabilities; for Ott–Antonsen equations for other parameter distributions, such as normal or bimodal distributions, see [115, 118].
The Kuramoto model
Consider the mean-field limit of the Kuramoto model (2) with a Lorentzian distribution of intrinsic frequencies. Recall that the driving field for the Kuramoto model is \(H(t)=KZ(t)\). Substituting this into (OA) we obtain the Ott–Antonsen equations for the Kuramoto model,
\[ \dot{Z} = (i\hat{\omega}-\Delta)Z + \frac{K}{2}\bigl(Z-\bar{Z}Z^{2}\bigr), \tag{17} \]
a two-dimensional system of equations since Z is complex-valued.
Kuramoto–Sakaguchi equations
For the Kuramoto–Sakaguchi equations (4) the driving field is a weighted sum of the individual population order parameters (6). Assuming a Lorentzian distribution of intrinsic frequencies with mean \(\hat {\omega }_{\sigma}\) and width \(\Delta _{\sigma}\) for each population \(\sigma\in \lbrace1,\ldots ,M\rbrace\), we obtain from (OA) the Ott–Antonsen equations for coupled populations of Kuramoto–Sakaguchi oscillators,
In other words, the Ott–Antonsen equations are a 2M-dimensional system that describes the interactions of the order parameters \(Z_{\sigma}\).
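Substituting the driving field \(H_{\sigma}=\sum_{\tau=1}^{M}c_{\sigma\tau}Z_{\tau}\) into (OA), these equations take the form (a reconstruction in the conventions above; the coefficients \(c_{\sigma\tau}\) are as defined for (4)):

```latex
\dot{Z}_{\sigma} = (i\hat{\omega}_{\sigma}-\Delta_{\sigma})Z_{\sigma}
  + \frac{1}{2}\Bigl(H_{\sigma}-\bar{H}_{\sigma}Z_{\sigma}^{2}\Bigr),
\qquad
H_{\sigma} = \sum_{\tau=1}^{M} c_{\sigma\tau}Z_{\tau},
\qquad \sigma = 1,\ldots,M.
\tag{18}
```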
Networks of Theta neurons
Consider a single population of Theta neurons with drive \(I(t)\) given by (7), with parameter-dependent intrinsic frequencies and driving field (8); we omit the population index σ. Assume that the variations in excitability \(\eta_{k}\) are drawn from a Lorentzian distribution with mean η̂ and width Δ. We obtain the Ott–Antonsen equations for the mean-field limit of a population of Theta neurons (8)
Note that, in contrast to (18), this is not yet a closed set of equations, as the exact form of the input current is still unspecified. We will close these equations in Sect. 4.2.1 below by writing I in terms of Z for different types of neural interactions.
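Assuming the mean-field equation takes the form \(\dot{Z}=\frac{1}{2}[(i\hat{\eta}-\Delta+iI)(1+Z)^{2}-i(1-Z)^{2}]\) (our reconstruction, consistent with the references above rather than a verbatim copy of the original display), one can sanity-check the excitable regime numerically: for identical neurons with \(\eta+I<0\), every neuron has a stable rest state with \(\tan(\theta^{*}/2)=-\sqrt{-(\eta+I)}\), so the fully synchronized population should settle there. A minimal sketch with illustrative values:

```python
import numpy as np

# Minimal sketch; eta, I, dt are illustrative. The mean-field equation used
# here is our reconstruction for Theta neurons (identical: Delta = 0):
#   dZ/dt = 0.5*[(i*eta - Delta + i*I)*(1+Z)**2 - i*(1-Z)**2]

def rhs(z, eta=-1.0, delta=0.0, I=0.0):
    return 0.5 * ((1j * eta - delta + 1j * I) * (1 + z) ** 2 - 1j * (1 - z) ** 2)

z, dt = 1.0 + 0.0j, 0.001        # start fully synchronized at theta = 0
for _ in range(20000):           # forward Euler up to T = 20
    z += dt * rhs(z)

theta_rest = 2 * np.arctan(-np.sqrt(1.0))   # tan(theta*/2) = -sqrt(-(eta+I)) = -1
print(z, np.exp(1j * theta_rest))           # both approach -1j
```

For \(\eta+I=-1\) the equation collapses to \(\dot{Z}=-i(1+Z^{2})\), whose stable equilibrium \(Z=-i\) corresponds to the rest phase \(\theta^{*}=-\pi/2\).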
The order parameter for the Theta neuron relates directly to quantities with a physical interpretation, such as the average firing rate of the network. Integrating the phase distribution (12) over the excitability parameter η under assumption (13), we obtain the distribution of all phases,
\[ f(\theta ;t) = \frac{1}{2\pi}\,\frac{1-\vert Z\vert ^{2}}{1-2\vert Z\vert \cos (\theta -\arg Z)+\vert Z\vert ^{2}}, \]
where Z may be a function of time. This distribution can be used to determine the probability that a Theta neuron has phase θ. Since a Theta neuron fires when its phase crosses \(\theta=\pi\), the average firing rate \(r(t)\) of the network at time t is the flux through \(\theta=\pi\), i.e.,
\[ r(t) = f(\pi ;t)\,\dot{\theta}\big\vert_{\theta=\pi} = \frac{1}{\pi}\,\frac{1-\vert Z\vert ^{2}}{\vert 1+Z\vert ^{2}}. \]
Here we used that \(\dot{\theta}\vert_{\theta=\pi}=2\) by (7), independent of the excitability η. The same result is obtained from the firing rate equations of the QIF neurons, as we explain in the next paragraph.
Ott–Antonsen reduction for equivalent networks
The mean-field reductions are also valid for systems that are equivalent to a network of sinusoidally coupled phase oscillators (1). As an example, we discussed above the relationship between QIF and Theta neurons via the transformation \(V=\tan{(\theta/2)}\), which carries over to the mean-field limit of infinitely many neurons where the Ott–Antonsen equations apply. More specifically, this transformation converts the distribution of phases (20) into a distribution
\[ \tilde{p}(V;t) = \frac{1}{\pi}\,\frac{X(t)}{(V-Y(t))^{2}+X(t)^{2}} \]
of voltages, where \(Z = (1-\bar{W})/(1+\bar{W})\) with \(W=X+iY\) and \(X,Y\in\mathbb{R}\). Equation (22) is called the Lorentzian ansatz in [102]. Importantly, the quantity W is obtained from a conformal transformation of the order parameter Z. This allows one to convert the Ott–Antonsen equations for the Theta neurons (19) into an equation for the mean field \(W=(1-\bar{Z})/(1+\bar{Z})\), given by
\[ \dot{W} = \Delta + i\bigl(\hat{\eta}+I-W^{2}\bigr), \]
which describes the QIF neurons. The advantage of this formulation is that both the real and imaginary parts of W have physical interpretations: \(Y(t)\) is the average voltage across the network and \(X(t)\) relates to the firing rate r of the population, i.e., the flux at \(V=\infty\), since \(\lim_{V\rightarrow\infty}\tilde{p}(V,t)\dot{V}(t) = X(t)/\pi=r\) [102].
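As a sketch of this bookkeeping, one can check numerically that the flux of the wrapped (Poisson-kernel) phase density at \(\theta=\pi\) agrees with \(\operatorname{Re}(W)/\pi\); the explicit kernel form and the sample value of Z below are our assumptions:

```python
import numpy as np

def phase_density(theta, Z):
    """Wrapped (Poisson-kernel) density of phases for order parameter Z, |Z| < 1."""
    R, phi = abs(Z), np.angle(Z)
    return (1 - R**2) / (2 * np.pi * (1 - 2 * R * np.cos(theta - phi) + R**2))

Z = 0.3 + 0.2j                              # sample order parameter (illustrative)
r_flux = 2 * phase_density(np.pi, Z)        # flux through theta = pi: theta' = 2 there
W = (1 - np.conj(Z)) / (1 + np.conj(Z))     # conformal transform to the QIF mean field
r_conformal = W.real / np.pi

print(r_flux, r_conformal)                  # the two expressions agree
```

Algebraically, both expressions equal \((1-\vert Z\vert^{2})/(\pi\vert 1+Z\vert^{2})\), which is why the firing rate can be read off directly from the real part of W.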
Watanabe–Strogatz reduction for identical oscillators
Mean-field reductions are possible for both finite and infinite networks of populations of identical oscillators. These reductions are due to the high level of degeneracy in the system, i.e., there are many quantities that are conserved as time evolves. This degeneracy was first observed in the early 1990s for coupled Josephson junction arrays [120], which relate directly to Kuramoto’s model of coupled phase oscillators [121]. Watanabe and Strogatz [122, 123] were able to calculate the conserved quantities explicitly using a clever transformation of the phase variables, thereby reducing the Kuramoto model from the N oscillator phases to three time-dependent (mean-field) variables together with \(N-3\) constants of motion. In terms of mathematical theory, the degeneracy originates from restrictions imposed by the algebraic structure of the equations [124–126], which is still an area of active research [127, 128].
The Watanabe–Strogatz reduction applies to sinusoidally coupled phase oscillator populations in which the oscillators within each population are identical, i.e., all oscillators have the same intrinsic frequency, \(\omega_{\sigma,k} = \omega_{\sigma}\), and are driven by the same field, \(H_{\sigma,k} = H_{\sigma}\). Indeed, the Watanabe–Strogatz and Ott–Antonsen reductions have been shown to be intricately linked [125, 129], as we briefly discuss below. Here we focus on finite networks for simplicity. In the following section we give the equations in generality and provide some mathematical detail; the equations are subsequently stated for the commonly used oscillator models discussed above.
*Constants of motion yield reduced equations
The dynamics of a finite population (1) of \(N>3\) identical oscillators can be described exactly in terms of three macroscopic (mean-field) variables [122, 123, 129, 130]: the bunch amplitude \(\rho_{\sigma}\), the bunch phase \(\varPhi_{\sigma}\), and the phase distribution variable \(\varPsi_{\sigma}\). Similar to the modulus and phase of the Kuramoto order parameter \(Z_{\sigma}=R_{\sigma}e^{i\phi_{\sigma}}\), the bunch amplitude \(\rho _{\sigma}\) and bunch phase \(\varPhi_{\sigma}\) characterize synchrony (or, equivalently, the maximum of the phase distribution); while \((R_{\sigma}, \phi_{\sigma})\) and \((\rho_{\sigma},\varPhi_{\sigma})\) do not coincide in general, they do if the population is fully synchronized. The phase distribution variable \(\varPsi_{\sigma}\) determines the shift of individual oscillators with respect to \(\varPhi_{\sigma}\), as illustrated in Fig. 3.
For a population of sinusoidally coupled phase oscillators (1) with driving field \(H_{\sigma}=H_{\sigma}(t)\) the macroscopic variables evolve according to the Watanabe–Strogatz equations
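These equations take the following standard form (a reconstruction in the conventions of [129, 130]; convention-dependent factors may differ from the original display):

```latex
\dot{\rho}_{\sigma} = \frac{1-\rho_{\sigma}^{2}}{2}\,
  \operatorname{Re}\bigl(H_{\sigma}e^{-i\varPhi_{\sigma}}\bigr),
\tag{WSa}

\dot{\varPhi}_{\sigma} = \omega_{\sigma} + \frac{1+\rho_{\sigma}^{2}}{2\rho_{\sigma}}\,
  \operatorname{Im}\bigl(H_{\sigma}e^{-i\varPhi_{\sigma}}\bigr),
\tag{WSb}

\dot{\varPsi}_{\sigma} = \frac{1-\rho_{\sigma}^{2}}{2\rho_{\sigma}}\,
  \operatorname{Im}\bigl(H_{\sigma}e^{-i\varPhi_{\sigma}}\bigr).
\tag{WSc}
```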
Mathematically speaking, the reduction to three variables means that the phase space \(\mathbb {T}^{N}\) of (1) is foliated by three-dimensional leaves, each of which is determined by constants of motion \(\psi^{(\sigma)}_{k}\), \(k=1, \dotsc, N\) (of which \(N-3\) are independent). In other words, the choice of constants of motion determines a specific three-dimensional invariant subset on which the macroscopic variables evolve. The Watanabe–Strogatz equations arise from the properties of Riccati equations, and the bunch variables are parameters of a family of Möbius transformations which determine the system’s dynamics; see [125–128] for more details on the mathematics behind these equations.
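The Möbius structure can be illustrated numerically: for identical, sinusoidally coupled oscillators the flow acts on the points \(e^{i\theta_{k}}\) by Möbius transformations of the unit circle, so the complex cross-ratio of any four oscillators is a constant of motion. A minimal sketch for plain Kuramoto coupling (\(H=Z\); the initial phases are illustrative):

```python
import numpy as np

# Sketch of the degeneracy behind the Watanabe-Strogatz reduction: Moebius
# flows preserve cross-ratios, so the complex cross-ratio of four oscillators
# should stay constant under the identical-oscillator dynamics.

def rhs(theta, omega=1.0):
    """Identical oscillators driven by the common field H = Z."""
    Z = np.mean(np.exp(1j * theta))
    return omega + np.imag(Z * np.exp(-1j * theta))

def cross_ratio(z):
    """Complex cross-ratio of four points z[0], ..., z[3]."""
    return ((z[0] - z[2]) * (z[1] - z[3])) / ((z[0] - z[3]) * (z[1] - z[2]))

theta = np.array([0.3, 1.1, 2.9, 4.0])
c0 = cross_ratio(np.exp(1j * theta))

dt = 0.001
for _ in range(5000):                  # classical RK4 up to T = 5
    k1 = rhs(theta)
    k2 = rhs(theta + 0.5 * dt * k1)
    k3 = rhs(theta + 0.5 * dt * k2)
    k4 = rhs(theta + dt * k3)
    theta = theta + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

c1 = cross_ratio(np.exp(1j * theta))
print(abs(c1 - c0))                    # conserved up to integration error
```

Any four oscillators give such a conserved quantity, which is one way to see where the \(N-3\) independent constants of motion come from.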
From a practical point of view, two things are needed to use the Watanabe–Strogatz equations (WS) to understand oscillator networks of the form (1). First, since the driving field H is often a function of the population order parameters \(Z_{\tau}\), \(\tau =1,\ldots, M\), we need to express \(Z_{\sigma}\) in terms of the bunch variables to obtain a closed set of equations. Write \(z_{\sigma}:=\rho_{\sigma}e^{i\varPhi_{\sigma}}\). As shown, for example, in [129], we have
Second, one needs to determine the constants of motion from the initial phases \(\theta_{\sigma,k}(0)\): a possible choice is to set \(\psi ^{(\sigma)}_{k}:=\theta_{\sigma,k}(0)\) and \(\rho_{\sigma}(0)=\varPhi _{\sigma}(0)=\varPsi_{\sigma}(0)=0\); see [123] for a detailed discussion and a different way to choose initial conditions that avoids the singularity at \(\rho_{\sigma}=0\). Taken together, the dynamics of the individual oscillators (1) are now determined by (WS) via (25) and vice versa.
The relationship (25) between the bunch variables and the order parameter also indicates how the Watanabe–Strogatz equations and the Ott–Antonsen equations are linked. Pikovsky and Rosenblum [130] showed that for constants of motion that are uniformly distributed on the circle, \(\psi^{(\sigma)}_{k} = 2\pi k/N\), we have \(\gamma_{\sigma}\to1\) as \(N\to\infty\). Consequently, \(Z_{\sigma}=z_{\sigma}\) for such a choice of constants of motion in the limit of infinitely many oscillators. For the Kuramoto model with \(H_{\sigma}= Z_{\sigma}\), Eqs. (WSa) and (WSb) depend on Ψ only through γ. Thus, for constant \(\gamma=1\), Eqs. (WSa) and (WSb) decouple from (WSc). These two equations are equivalent to the Ott–Antonsen equations (OA) in the mean-field limit for identical oscillators. To summarize, the dynamics of the mean-field limit for identical oscillators are given by the Watanabe–Strogatz equations together with a distribution of constants of motion. For the particular choice of a uniform distribution of constants of motion, the equations decouple and the effective dynamics are given by the Ott–Antonsen equations.
Watanabe–Strogatz equations for commonly used oscillator models
We now summarize the Watanabe–Strogatz equations (WS) for the commonly used oscillator models described in Sect. 2.
Kuramoto–Sakaguchi equations
For the Kuramoto–Sakaguchi model (4), the driving field H is a linear combination of the order parameters, \(H_{\sigma}= \sum_{\tau =1}^{M}c_{\sigma\tau} Z_{\tau}\). Assuming that the oscillators within each population are identical, \(\omega_{\sigma,k}=\omega _{\sigma}\), the dynamics are governed by the Watanabe–Strogatz equations for coupled Kuramoto–Sakaguchi populations,
Networks of Theta neurons
For a finite population of identical Theta neurons (8) with identical excitability η and input current \(I(t)\) the Watanabe–Strogatz equations for identical Theta neurons [131] evaluate to
Note that, as for the Ott–Antonsen reduction above, one still needs to close this system by writing I in terms of the bunch variables in (WS) and the constants of motion. This is not straightforward and requires a considerable amount of computation [131].
Reductions for equivalent networks
For a finite network of identical QIF neurons governed by (9) with \(\eta_{j}=\eta\) for all j, the transformation \(V=\tan {(\theta/2)}\) converts the network into a network of identical Theta neurons (7). Consequently, such a network is also described by equations of the form (27a)–(27c). As mentioned above, in the limit \(N\to\infty\) with equally spaced constants of motion, Eq. (27c) decouples from (27a) and (27b). In this case, writing \(z=\rho e^{i\varPhi}\), we find that z satisfies (19) or, equivalently, (23) (with \(\hat {\eta }=\eta\) and \(\Delta=0\)).
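The pointwise equivalence of the two single-neuron equations under \(V=\tan(\theta/2)\) can be checked directly; a minimal sketch with illustrative random samples (not from the text):

```python
import numpy as np

# Pointwise chain-rule check: if theta obeys the Theta neuron equation
#   d(theta)/dt = (1 - cos theta) + (1 + cos theta)*(eta + I),
# then V = tan(theta/2) obeys the QIF equation dV/dt = V**2 + eta + I.

rng = np.random.default_rng(0)
for _ in range(100):
    theta = rng.uniform(-3.0, 3.0)            # stay away from theta = +-pi
    c = rng.uniform(-2.0, 2.0)                # c stands for eta + I
    theta_dot = (1 - np.cos(theta)) + (1 + np.cos(theta)) * c
    V = np.tan(theta / 2)
    V_dot = 0.5 * (1 + V**2) * theta_dot      # chain rule: dV/dtheta = (1 + V**2)/2
    assert abs(V_dot - (V**2 + c)) < 1e-8
print("chain-rule check passed")
```

The identity follows from \(1-\cos\theta = 2\sin^{2}(\theta/2)\) and \(1+\cos\theta = 2\cos^{2}(\theta/2)\); firing at \(\theta=\pi\) corresponds to \(V\to\infty\).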
Limitations and challenges
Before we apply the mean-field reductions to particular oscillator networks in the next section, some (mathematical) comments on the limitations of these approaches are in order.
The main assumption behind the reduction methods is that network interactions are mediated by a coupling function with a single harmonic (of arbitrary order). There are explicit examples [132–134] showing that the reductions, as described above, become invalid when this assumption is violated. For example, chaotic dynamics may occur in situations where the reduction would yield an effective two-dimensional phase space; we discuss this example below. This does not mean that the reductions break down completely: there may still be some degeneracy in the system if the interaction is of a specific form; see [135] for a more detailed discussion. It remains a challenge to identify what part of the mean-field reduction (if any) remains valid for more general interaction functions and phase response curves.
The Ott–Antonsen reduction for the mean-field limit allows for the oscillators to be nonidentical. By contrast, the Watanabe–Strogatz reduction of finite networks requires oscillators to be identical. Neither of these approaches applies to finite networks of nonidentical oscillators, and understanding such networks remains a challenge. Direct numerical simulations to elucidate the dynamics of networks of N almost identical oscillators are challenging, as one needs to integrate an almost integrable dynamical system.^{Footnote 13} There has also been some recent progress in analyzing situations in which the Ott–Antonsen or Watanabe–Strogatz equations do not apply. First, a perturbation theory for the exact mean-field equations has been developed to elucidate the dynamics of systems that are close to sinusoidally coupled, for example if there are very weak higher harmonics in the interaction function [136]. Second, while not an exact representation of the dynamics, the collective-coordinates approach by Gottwald and coworkers [76, 137, 138] has been instructive for gaining insights into the dynamics of finite networks of nonidentical oscillators.
Finally, Ott and Antonsen showed that the manifold of oscillator densities \(f_{\sigma}\) on which the reduction holds is attracting [116]. Their method of proof has been shown to apply to a wider class of systems [103]. As pointed out by Mirollo [139] and later elaborated further [128], their proof is based on a strong smoothness assumption on the density \(f_{\sigma}\), which implies limitations of this approach. More precisely, to be able to evaluate contour integrals using the residue theorem, it is typically assumed that the integrand in (15), containing the intrinsic frequency distribution \(h_{\sigma}\) and the density \(f_{\sigma}\), is holomorphic. In particular, this assumption is only valid for distributions \(h_{\sigma}\) that allow for arbitrarily large (or small) intrinsic frequencies with nonzero probability: the identity theorem for holomorphic functions implies that \(h_{\sigma}(\omega)>0\) for all \(\omega\in \mathbb {R}\). Any distribution for which the intrinsic frequencies are confined to a finite interval—and the intrinsic frequencies of any finite collection of oscillators lie in a finite interval—is excluded.^{Footnote 14} Hence, while the manifold described by Ott and Antonsen attracts some class of oscillator densities, it is not clear how large this class actually is (it does not include δ-distributions where all oscillators have the same phase). Put differently, it is important to explicitly characterize the space of densities in which the Ott–Antonsen manifold is attracting.
Dynamics of coupled oscillator networks
We now discuss global synchrony and synchrony patterns in phase oscillator networks, and highlight how the reductions presented in the previous section simplify their analysis. While we indicate along the way how most of these systems are relevant from the point of view of biology and neuroscience, we here take a predominantly dynamical systems perspective and highlight the applicability of, for example, bifurcation theory [61, 140]. We focus on a small number of coupled populations of oscillators, which can be seen as building blocks for larger models consisting of many coupled populations (e.g., regions of interest in a whole-brain model as discussed in Sect. 5 below).
Networks of Kuramototype oscillators
We first consider networks of Kuramoto–Sakaguchi and related Kuramototype oscillators. Despite their simplicity, they have found widespread application, for example in neuroscience, as outlined in Sect. 2.2, to understand synchronization phenomena. The network interactions of such oscillators depend on phase differences. Bifurcations may occur as one introduces an explicit phase dependency to the coupling [141] such as in the networks of Theta neurons which we discuss in the following section.
One oscillator population
Example 1
We first revisit Kuramoto’s original problem (see Problem 1 in Sect. 2.1 above) from the perspective of mean-field reductions: Given a globally coupled network of Kuramoto oscillators (2) with a Lorentzian distribution of intrinsic frequencies, what is the critical coupling strength \(K_{c}\) at which oscillators start to synchronize?
This problem is surprisingly easy to solve in the mean-field limit \(N\to\infty\) using the Ott–Antonsen reduction. Assume that the distribution of intrinsic frequencies is a Lorentzian with mean ω̂ and width Δ. Recall that the order parameter Z evolves according to the Ott–Antonsen equation (17): Separating (17) with \(Z= Re^{i\phi }\) into real and imaginary parts yields
\[ \dot{R} = -\Delta R + \frac{K}{2}R\bigl(1-R^{2}\bigr), \tag{28a} \]
\[ \dot{\phi} = \hat{\omega}. \tag{28b} \]
Moreover, the manifold on which (17) describes the mean-field limit of (2) attracts initial phase distributions. Since the equation for the mean phase ϕ is completely uncoupled, it suffices to analyze (28a). Thus, Kuramoto’s problem in the infinite-dimensional mean-field limit reduces to solving the one-dimensional real ordinary differential equation (28a): By elementary analysis, we find that the equilibrium \(R=0\) is stable for \(K< K_{c}=2\Delta\) and loses stability in a pitchfork bifurcation where the solution \(R =\sqrt{1-2\Delta/K}>0\) becomes stable. The same analysis applies to the Kuramoto–Sakaguchi network (4) with \(M=1\) for phase lag \(\alpha\in (-\frac{\pi}{2}, \frac{\pi}{2} )\) with K replaced by \(K\cos(\alpha)\) in (17); note that for phase lag \(\sin{\alpha}\neq0\) the mean-phase equation becomes \(\dot{\phi}=\hat {\omega }-\frac{K}{2}\sin(\alpha)(1+R^{2})\), so that the frequency now depends nontrivially on R.
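This analysis can be checked by integrating the reduced equation directly; a minimal sketch (forward Euler; the parameter values are illustrative, not from the text):

```python
import numpy as np

# Minimal sketch: forward Euler integration of the reduced equation
#   dZ/dt = (i*w - D)*Z + (K/2)*(Z - conj(Z)*Z**2)
# to check the pitchfork scenario K_c = 2*D, R_inf = sqrt(1 - 2*D/K).

def integrate_oa(K, w=0.0, delta=0.5, z0=0.1 + 0.0j, dt=0.01, T=200.0):
    z = z0
    for _ in range(int(T / dt)):
        z += dt * ((1j * w - delta) * z + 0.5 * K * (z - np.conj(z) * z**2))
    return z

delta = 0.5                                    # Lorentzian half-width: K_c = 2*delta = 1
R_sub = abs(integrate_oa(K=0.8, delta=delta))  # K < K_c: incoherence, R -> 0
R_sup = abs(integrate_oa(K=2.0, delta=delta))  # K > K_c: R -> sqrt(1 - 2*delta/K)
print(R_sub, R_sup)
```

For \(K=2\) and \(\Delta=0.5\) the order parameter settles at \(R=\sqrt{1/2}\), while for \(K=0.8\) it decays to zero, in line with the bifurcation analysis above.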
Global synchronization of finite networks of identical Kuramoto–Sakaguchi oscillators is readily analyzed using the Watanabe–Strogatz reduction. As above, a phase variable decouples and we obtain a two-dimensional system which describes the dynamics of (4) for \(M=1\). Its analysis [122] shows that the system synchronizes perfectly, \(R\to1\) as \(t\to\infty\), for \(\alpha\in (-\frac {\pi}{2}, \frac{\pi}{2} )\) (attractive coupling) and converges to an incoherent equilibrium, \(R\to0\) as \(t\to\infty\), for \(\alpha\in (\frac{\pi}{2}, \frac{3\pi}{2} )\) (repulsive coupling). In the marginal case \(\cos(\alpha) = 0\) the system is Hamiltonian [123].
Multimodal distributions in the Kuramoto model
While Kuramoto’s original model considered a single oscillator population with unimodally distributed frequencies—such as the Lorentzian distribution—Kuramoto also speculated on what dynamic behaviors a network consisting of a single population would exhibit if the distribution of natural frequencies was instead bimodal [80]: Depending on the coupling strength, the width and spacing of the peaks of the frequency distribution, oscillators may either aggregate and form a single crowd of oscillators, thus forming one “giant oscillator,” or disintegrate into two mutually unlocked crowds, corresponding to two giant oscillators.
Crawford analyzed this case rigorously for the weakly nonlinear behavior near the incoherent state using center manifold theory [142] and thus explained local bifurcations in the neighborhood of the incoherent state. Using the Ott–Antonsen reduction, Martens et al. [118] obtained exact results on all possible bifurcations and the bistability between incoherent, partially synchronized, and traveling wave solutions. Similarly, rather than superimposing two unimodal frequency distributions, Pazó and Montbrió [119] considered a modified model where the distribution of intrinsic frequencies is the difference of two Lorentzians; this allows for the central dip to become zero.^{Footnote 15}
Interestingly, describing a single population with an m-modal frequency distribution using the Ott–Antonsen reduction yields a set of m coupled ordinary differential equations. This set describes the dynamics of m order parameters (11), one associated with each peak of the distribution, resulting in collective behavior where oscillators aggregate into a single group or into up to m groups of oscillators. The question arises whether the resulting set of equations can be related to M-population models as described by (4). This question was picked up by Pietras and Daffertshofer [143], who showed that the dynamical equations describing \(M=1\) population with a bimodal distribution can be mapped to \(M=2\) populations (4) with nonidentical coupling strengths \(K_{\sigma\tau}\) and equivalent bifurcations. However, this equivalence breaks down for \(M=3\) populations and trimodal distributions.
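For instance, for a symmetric bimodal mixture of two Lorentzians the reduction yields one equation per mode (a sketch; the notation \(L_{\Delta}\) for a Lorentzian of half-width Δ is ours):

```latex
h(\omega) = \tfrac{1}{2}\bigl[L_{\Delta}(\omega-\hat{\omega}_{1})
  + L_{\Delta}(\omega-\hat{\omega}_{2})\bigr]
\quad\Longrightarrow\quad
\dot{Z}_{m} = (i\hat{\omega}_{m}-\Delta)Z_{m}
  + \tfrac{1}{2}\bigl(H-\bar{H}Z_{m}^{2}\bigr),
\qquad m = 1,2,
```

with the physical order parameter \(Z=\tfrac{1}{2}(Z_{1}+Z_{2})\) and, for Kuramoto coupling, \(H=KZ\); the residue calculation simply picks up one pole per mode.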
Higher-order and nonadditive interactions
Note that networks of Kuramoto–Sakaguchi oscillators (4) make two important assumptions on the network interactions. First, the interactions are sinusoidal, as discussed above, since the coupling function has a single harmonic. Second, the network interactions are additive [72, 144], that is, the joint influence of two distinct oscillators on a third is given by the sum of the individual interactions. By contrast, coupling between oscillatory units generically contains nonlinear (nonadditive) interactions; concrete examples include oscillator networks [145], interactions in ecological networks [146], and nonlinear dendritic interactions between neurons [147–149]. For weakly coupled oscillator networks, higher-order interaction terms include higher harmonics in the coupling function as well as coupling terms which depend nonlinearly on three or more oscillator phases [150]. Such terms arise naturally in phase reductions: if the interaction between the nonlinear oscillators is generic, as Ashwin and Rodrigues [151] showed for a globally coupled network of symmetric oscillators close to a Hopf bifurcation, the corresponding higher-order interaction terms can be calculated explicitly. Moreover, higher-order interactions in the effective phase dynamics can also arise for additively coupled nonlinear oscillators [152], for example in higher-order phase reductions [153]. Nonadditive interactions can be exploited for applications, such as building neurocomputers [154].
The mean-field reductions discussed here can be used to analyze networks with particular types of higher-order interactions. For example, Skardal and Arenas [155] consider a single globally coupled population of indistinguishable oscillators where pure triplet interactions of the form \(\sin(\theta_{l}+\theta_{j}-2\theta_{k})\) determine the joint influence of oscillators j, l on oscillator k. In the mean-field limit, they find multistability and hysteresis between incoherent and partially synchronized attractors. In general, however, higher-order interaction terms lead to phase oscillator networks where the mean-field reductions cease to apply [134].
Generalizations
Much progress has been made in understanding synchronization and more complicated collective dynamics in globally coupled networks of Kuramoto oscillators and their generalizations; see [141, 156, 157] for surveys. While we discussed Kuramoto’s problem as an example, the same methods apply to more general types of driving fields H: they may include homogeneous [115] or heterogeneous delays [158, 159] (the latter being of specific interest for coupled populations of neurons), they may be heterogeneous in terms of the contribution of individual oscillators [160], or they may include generalized mean fields [127]. However, note that much richer dynamics are possible when the assumption of sinusoidal coupling breaks down. Because of the Poincaré–Bendixson theorem [61, 161], chaos is not possible in the mean-field reductions for \(M=1\) populations of Kuramoto–Sakaguchi oscillators, since their effective dynamics are one- or two-dimensional. By contrast, even for fully symmetric networks, higher harmonics in the phase response curve/coupling function may lead to chaotic dynamics [132, 134].
Two oscillator populations
Two coupled populations of Kuramoto–Sakaguchi oscillators can give rise to a larger variety of synchrony patterns. Before considering general coupling between populations, we first discuss the widely investigated case of identical (and almost identical) populations of Kuramoto–Sakaguchi oscillators (4) with a Lorentzian distribution of intrinsic frequencies. To be precise, we say that all populations of (4) are identical if for any two populations σ, τ there is a permutation which sends σ to τ and leaves the corresponding equations (18) for the mean-field limit invariant. Intuitively speaking, this means we can swap any population with any other population without changing the dynamics. Mathematically speaking, the populations are identical if the Ott–Antonsen equations (18) have a permutational symmetry group that acts transitively [162]. Note that for the populations to be identical, the oscillators do not need to be identical. But if the populations are identical, then the frequency distributions \(h_{\sigma}\) are the same for all populations. Moreover, if the oscillators within each population have the same intrinsic frequency (as required for the Watanabe–Strogatz reduction), then all oscillators in the network have the same intrinsic frequency.
Oscillator networks which are organized into distinct populations support synchrony patterns which may be localized, that is, some populations show more (or less) synchrony than others. While this may not be surprising if the populations are nonidentical, such dynamics may also occur when the populations are identical. For identical populations of Kuramoto–Sakaguchi oscillators, the localized dynamics arise purely through the network interactions—the populations would behave identically if uncoupled—and hence constitute a form of dynamical symmetry breaking. The phenomenon of “coexisting coherence and incoherence” has been dubbed a chimera state in the literature [163] and has attracted a tremendous amount of attention in the last two decades; see [63–65] for recent reviews. To date, an entire zoo of chimeras and chimera-like creatures has emerged in a range of different networked dynamical systems—with attempts to classify and distinguish these creatures [164, 165]—beyond the original context of phase oscillators [166]. Here we will discuss chimeras only in coupled populations of Kuramoto–Sakaguchi oscillators (4) as examples of localized patterns of (phase and frequency) synchrony.
Synchrony patterns for two identical populations
The Ott–Antonsen reduction has been instrumental in understanding the dynamics of networks consisting of \(M=2\) populations of Kuramoto–Sakaguchi oscillators. Assuming that all intrinsic frequencies are distributed according to a Lorentzian, we obtain two coupled Ott–Antonsen equations (18) for the limit of infinitely large populations. In this section we focus on networks of identical populations, that is, the distributions of intrinsic frequencies are the same and the coupling is symmetric; cf. Fig. 4(a). This allows one to simplify the parametrization of the system by introducing self-coupling \(c_{s} = k_{s} e^{i\alpha_{s}} := c_{11} = c_{22}\) and neighbor-coupling \(c_{n} = k_{n} e^{i\alpha_{n}} := c_{12} = c_{21}\) parameters and the coupling strength disparity \(A=(k_{s}-k_{n})/(k_{s}+k_{n})\). Writing \(Z_{\sigma}=R_{\sigma}e^{i\phi_{\sigma}}\) as above, the state of (18) is fully determined by the amount of synchrony in each population, \(R_{1}\), \(R_{2}\), and the difference of the mean phases, \(\psi :=\phi_{1}-\phi_{2}\), of the two populations; cf. [167]. Naturally, such networks support three homogeneous synchronized states: a fully synchronized state \(\textrm {S}\textrm {S}_{0}= \lbrace(R_{1},R_{2},\psi)=(1,1,0) \rbrace\) where both populations are synchronized and in phase, a cluster state \(\textrm {S}\textrm {S}_{\pi}= \lbrace(R_{1},R_{2},\psi)=(1,1,\pi) \rbrace\) where both populations are synchronized and in antiphase, and a completely incoherent state \(\textrm {I}= \lbrace(R_{1},R_{2},\psi)=(0,0,*) \rbrace\). A bifurcation analysis shows that only one of the three is stable for any given choice of coupling parameters [167].
In addition to homogeneous synchronized states, networks of two identical populations also support synchronization patterns where synchrony is localized in one of the two populations, a chimera, as illustrated in Fig. 4(b). As discussed by Abrams et al. [168], for homogeneous phase lags (\(\alpha_{s}=\alpha_{n}\)) stable complete synchrony \(\textrm {S}\textrm {S}_{0}\) and a stable chimera in \(\textrm {D}\textrm {S}= \lbrace R_{1}<1, R_{2}=1 \rbrace\), which is either stationary or oscillatory, coexist.^{Footnote 16} Note that the Ott–Antonsen reduction simplifies the analysis tremendously: It translates the problem for large oscillator networks into a low-dimensional bifurcation problem. Martens et al. [169] outlined the basins of attraction of the coexisting stable synchrony patterns, thereby answering the question as to which (macroscopic or microscopic) initial conditions converge to either state. Through directed perturbations it is possible to switch between different synchrony patterns, and thus between functional configurations of the network that are of relevance in neuroscience [32, 170], for example, embodying memory states or controlling the predominant direction of information flow between subpopulations of oscillators [33]. Further work addresses the robustness of chimeras against various inhomogeneities, including heterogeneous frequencies [100, 171], network heterogeneity [172], and additive noise [171].
If one allows for heterogeneous phase-lag parameters, \(\alpha _{s}\neq\alpha_{n}\), a variety of other attractors with localized synchrony emerge [167, 173]. This includes in particular solutions in \(\textrm {D}\textrm {D}= \lbrace0< R_{1}< R_{2}, R_{2}<1 \rbrace\) where neither population is fully phase synchronized; cf. Fig. 4(b). These include not only stationary or oscillatory solutions of the state variables, but also attractors where the order parameters \(Z_{1}\), \(Z_{2}\) fluctuate chaotically both in amplitude and with respect to their phase difference [174]. Finite networks with two populations of identical oscillators may be analyzed using the Watanabe–Strogatz equations (26a)–(26c). One finds that the bifurcation scenarios for the appearance of chimera states are similar to the dynamics observed for infinite populations [175]. Moreover, macroscopic chaos also appears in many finite networks [174] down to just two oscillators per population.
A note on finite networks of identical oscillators and localized frequency synchrony
For finite oscillator networks, the widely used intuitive definition of a chimera as a solution for networks of (almost) identical oscillators where “coherence and incoherence coexist” is difficult to apply in a mathematically rigorous way. Hence, Ashwin and Burylko [176] introduced the concept of a weak chimera which provides a mathematically testable definition of a chimera state in finite networks of identical oscillators; here, we only give an intuition and refer to [176, 177] for a precise definition. The main feature of a weak chimera is that identical oscillatory units (with the same intrinsic frequency if uncoupled) generate rhythms with two or more distinct frequencies solely through the network interactions—this is a fairly general form of synchronization. In the context of dynamical systems with symmetry [162], weak chimeras are, as outlined in [178], an example of dynamical symmetry breaking where identical elements have nonidentical dynamics since their frequencies are distinct.
More specifically, a weak chimera is characterized by localized frequency synchrony in a network of identical oscillators. Similar to the definition of identical populations further above, we say that the oscillators are identical if for a pair of oscillators \((\sigma,k)\) and \((\tau, j)\) there exists an invertible transformation of the oscillator indices which keeps the equations of motion invariant. In other words, all oscillators are effectively equivalent. Now \(\dot{\theta}_{\sigma,k}(t)\) is the instantaneous frequency of oscillator \((\sigma,k)\)—the change of phase at time t—and thus the asymptotic average frequency of oscillator \((\sigma,k)\) is
\[ \varOmega_{\sigma,k} = \lim_{T\to\infty} \frac{1}{T} \int_{0}^{T} \dot{\theta}_{\sigma,k}(t) \,\textrm {d}t = \lim_{T\to\infty} \frac{\theta_{\sigma,k}(T)-\theta_{\sigma,k}(0)}{T}. \]
Rather than looking at phase synchrony (\(\theta_{\sigma,k}=\theta _{\tau, j}\)) of oscillators \((\sigma,k)\) and \((\tau, j)\), we say that the oscillators are frequency synchronized if \(\varOmega _{\sigma,k}=\varOmega_{\tau, j}\). Weak chimeras now show localized frequency synchrony, that is, all oscillators within one population have the same frequency \(\varOmega_{\sigma}=\varOmega _{\sigma,k}\) while there are at least two distinct populations \(\tau \neq\tau'\) that have different frequencies, \(\varOmega_{\tau}\neq \varOmega_{\tau'}\). Note that weak chimeras are impossible for a globally coupled network of identical phase oscillators (that is, there is only a single population \(M=1\)): Such a network structure forces frequency synchrony of all oscillators [176].
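Average frequencies are easy to estimate from a simulated trajectory by integrating the lifted (unwrapped) phases and dividing the total phase advance by the integration time. The sketch below assumes the additive Kuramoto–Sakaguchi form (4) with a common intrinsic frequency; the coupling matrix, phase lags, and initial condition are purely illustrative.

```python
import numpy as np

def ks_rhs(theta, omega, K, alpha):
    """Kuramoto-Sakaguchi network, cf. (4): theta holds M populations of N phases.
    K[s, t] and alpha[s, t] are coupling strength / phase lag from population t to s."""
    M = K.shape[0]
    N = theta.size // M
    th = theta.reshape(M, N)
    dth = np.full((M, N), float(omega))
    for s in range(M):
        for tau in range(M):
            dth[s] += (K[s, tau]/N) * np.sin(th[tau][None, :] - th[s][:, None]
                                             - alpha[s, tau]).sum(axis=1)
    return dth.ravel()

def average_frequencies(theta0, omega, K, alpha, T=100.0, dt=0.01):
    """Estimate Omega_{sigma,k} = (theta(T) - theta(0)) / T on lifted phases."""
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T/dt)):          # classical RK4, fixed step
        k1 = ks_rhs(theta, omega, K, alpha)
        k2 = ks_rhs(theta + 0.5*dt*k1, omega, K, alpha)
        k3 = ks_rhs(theta + 0.5*dt*k2, omega, K, alpha)
        k4 = ks_rhs(theta + dt*k3, omega, K, alpha)
        theta = theta + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    return (theta - np.asarray(theta0)) / T

# example: M = 2 populations of N = 2 identical oscillators (illustrative values)
K = np.array([[1.0, 0.4], [0.4, 1.0]])
alpha = np.full((2, 2), 1.5)
Omega = average_frequencies(np.array([0.0, 3.0, 1.0, 2.5]), 1.0, K, alpha)
```

A weak chimera is indicated when the estimated frequencies agree within each population but differ between at least two populations [176]; whether this actually occurs depends on parameters and initial conditions.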
Weak chimeras have been shown to exist in a range of networks which consist of \(M=2\) interacting populations of phase oscillators. For weakly interacting populations of phase oscillators with general interaction functions there can be stable weak chimeras with quasiperiodic [176, 179] and chaotic dynamics [177]. However, neither weak interaction nor general coupling functions are necessary for dynamics with localized frequency synchrony to arise: Even sinusoidally coupled networks (4) of just \(N=2\) oscillators per population support stable regular [175] and chaotic [174] weak chimeras.
Dynamics of nonidentical populations with distinct frequency distributions
As mentioned above, chimera states appear for two identical populations of phase oscillators. Using the Ott–Antonsen equations, Laing showed that these dynamics persist for (4) with \(M=2\) if \(\Delta_{\sigma}>0\) and \(\omega_{\sigma}\neq\omega _{\sigma'}\) in the large N limit [100]; see also [180] for further bifurcation analysis. As heterogeneity is increased, stationary chimera states can become oscillatory through Hopf bifurcations and may eventually be entirely destroyed.
Montbrió et al. [181] studied two populations where not only frequencies were nonidentical (\(\Delta _{\sigma}>0\), \(\varOmega_{\sigma}\neq\varOmega_{\sigma'}\)), but also the coupling was asymmetric between the two populations. In another study, Laing et al. considered noncomplete networks to study the sensitivity of chimera states against gradual removal of random links starting from a complete network [172], and found that oscillations of chimera states can be either created or suppressed depending on the type of link removal.
Dynamics of nonidentical populations with asymmetric input or output
Another way to break symmetry in a population of Kuramoto oscillators is inspired by neural networks with excitatory and inhibitory coupling [56]: One replaces K with a random coefficient \(K_{j}\) inside the sum in (2). Thus, oscillators with \(K_{j} >0\) mimic the behavior of excitatory neurons while those with \(K_{j}<0\) correspond to inhibitory neurons. The interactions between oscillators j and l are not necessarily symmetric, unless \(K_{j} = K_{l}\). The study by Hong and Strogatz [182] reveals that—somewhat surprisingly—extending the Kuramoto model in this fashion yields dynamics that resemble those of the original model (2) when the intrinsic frequencies \(\omega_{k}\) are nonidentical. Similar coupling schemes accommodating excitatory and inhibitory coupling have been devised for multipopulation models (5) to study how solitary states emerge within a synchronized population, thus leading to the formation of clusters [183].
Another possibility to include coupling heterogeneity, considered by Hong and Strogatz, is to introduce an oscillator-dependent coupling parameter \(K_{k}\) outside of the sum in Eq. (2); see [184]. This relates to social behavior: An oscillator k is a conformist if \(K_{k}>0\) (it wants to synchronize) and a contrarian if \(K_{k}<0\). This setup may give rise to complex states where oscillators bunch up in groups with a phase difference of π or move like a traveling wave. A later study found that the system with identical oscillators harbors even more complex dynamics, such as incoherent and other states [185].
Three and more oscillator populations
Stable synchrony patterns for three identical populations
We first consider identical populations with reciprocal coupling in the sense that \(c_{\sigma\tau}=c_{\tau\sigma}\); see [186, 187]. Here the coupling is determined by self-coupling \(k_{s}\) and phase lag \(\alpha _{s}\), as well as coupling strengths and phase lags to the neighboring populations \(k_{n_{1}}\), \(k_{n_{2}}\), \(k_{n_{3}}\) and \(\alpha_{n_{1}}\), \(\alpha_{n_{2}}\), \(\alpha_{n_{3}}\). Reducing the phase-shift symmetry, the state of the system is determined by the magnitudes of the order parameters, \(R_{\sigma}= \vert Z_{\sigma} \vert \), and the phase differences between the mean fields, \(\psi_{1}=\phi_{2}-\phi_{1}\) and \(\psi_{2}=\phi_{3}-\phi_{1}\).
Networks of three populations support a variety of localized synchrony patterns. For coupling with a triangular symmetry, that is, \(k_{n_{1}}=k_{n_{2}}=k_{n_{3}}\leq k_{s}\) and \(\alpha_{n_{1}}=\alpha _{n_{2}}=\alpha_{n_{3}}=\alpha_{s}\), Martens [186] identified coexisting synchrony patterns: There are three stable solution branches, full phase synchrony \(\textrm {S}\textrm {S}\textrm {S}= \lbrace R_{1}=R_{2}=R_{3}=1 \rbrace\) as well as two chimeras in \(\textrm {S}\textrm {D}\textrm {S}= \lbrace R_{1}=R_{3}=1>R_{2} \rbrace\) and in \(\textrm {D}\textrm {S}\textrm {D}= \lbrace R_{1}=R_{3}< R_{2}=1 \rbrace\). The Ott–Antonsen reduction allows one to perform an explicit bifurcation analysis of the resulting planar system and shows bifurcations similar to networks with \(M=2\) populations. Remarkably, there are parameter values where \(\textrm {S}\textrm {S}\textrm {S}\) as well as the chimeras in \(\textrm {S}\textrm {D}\textrm {S}\), \(\textrm {D}\textrm {S}\textrm {D}\) are stable simultaneously; this gives rise to the possibility of switching between these three synchronization patterns through directed perturbations [169]. This triangular symmetry is broken in [187] by allowing \(k_{n_{2}}\neq k_{n_{1}}\). Thus, the coupling between populations 2 and 3 can be gradually reduced or increased until the network effectively becomes a chain of three populations or effectively two populations, respectively. A bifurcation analysis shows that the chimeras in \(\textrm {S}\textrm {D}\textrm {S}\) and \(\textrm {D}\textrm {S}\textrm {D}\) persist and provides stability boundaries.
Metastability and dynamics of localized synchrony for identical oscillators
The synchrony patterns above were primarily considered as attractors: For a range of initial phase configurations, the long-term dynamics of the oscillator network will exhibit a particular synchrony pattern. While this may be a good approximation for large-scale neural dynamics on a short timescale, the global dynamics of large-scale neural networks in the brain are usually much more complicated [26]. Neural recordings show that particular dynamical states (of synchrony and activity) persist for some time before a rapid transition to another state [53, 188, 189]. One approach to model such dynamics is to assume that there are a number of metastable states (rather than attractors) in the network phase space which are connected dynamically by heteroclinic trajectories^{Footnote 17} [190]. If heteroclinic trajectories form a heteroclinic network^{Footnote 18}—the nodes of this network are dynamical states, links are connecting heteroclinic trajectories—the system can exhibit sequential switching dynamics: The state will stay close to one metastable state before a rapid transition, or switch, to the next dynamical state. Heteroclinic networks have long been subject to investigation, both theoretically [191] and with respect to applications in neuroscience [43]; one possible modeling approach is to write down kinetic (Lotka–Volterra type) equations for interacting macroscopic activity patterns [192, 193] which support heteroclinic networks.
Heteroclinic dynamics also arise in phase oscillator networks. For globally coupled oscillator networks, i.e., \(M=1\) population, there are heteroclinic networks between patterns of phase synchrony [194, 195]. As mentioned above, all oscillators in these networks are necessarily frequency synchronized, that is, they show the same rate of activity. More recently, it was shown that more general network interactions than those in (4) allow for heteroclinic switching between weak chimeras as states with localized frequency synchrony [196]: Each population will sequentially switch between states with high activity (frequency) to a state with low activity. One of the simplest phase oscillator networks which exhibits such dynamics consists of \(M=3\) populations of \(N=2\) oscillators where \(K>0\) mediates the coupling strength between populations. More precisely, the dynamics of oscillator \((\sigma, k)\) are given by
Here the interactions within each population are not just given by a first harmonic as in (4) but also by a second harmonic (scaled by a parameter r); this is sometimes referred to as Hansel–Mato–Meunier coupling [194]. Moreover, the interactions between populations are not additive but consist of nonlinear functions of four phase variables; this is a concrete example of the higher-order interaction terms discussed above. It remains an open question whether such generalized interactions are necessary to generate heteroclinic dynamics between weak chimeras.
Dynamics of metastable states with localized (frequency) synchrony are of interest also in larger networks of \(M>3\) populations. Since explicit analytical results are hard to obtain for such networks, Shanahan [197] used numerical measures to analyze how metastable and “chimera-like” the network dynamics are. Recall that \(R_{\sigma}(t)\) encodes the level of synchrony of population σ at time t. Let \(\langle\cdot \rangle_{\sigma}\), \(\operatorname {Var}_{\sigma}\) denote the mean and variance over all populations \(\sigma=1, \dotsc, M\) and \(\langle\cdot \rangle_{T}\), \(\operatorname {Var}_{T}\) the mean and variance over the time interval \([0, T]\). Now
\[ \lambda = \bigl\langle \operatorname {Var}_{T}(R_{\sigma}) \bigr\rangle _{\sigma} \]
quantifies how much the synchrony of individual populations varies over time, while
\[ \chi = \bigl\langle \operatorname {Var}_{\sigma}(R_{\sigma}) \bigr\rangle _{T} \]
encodes how much synchrony varies across populations. Intuitively, large values of λ correspond to a high level of “metastability” while large values of χ indicate that the dynamics are “chimera-like”. These measures have subsequently been applied to more general oscillator networks [198, 199], as well as to study the effect of changes to the network structure (for example, through lesions) on the dynamics of Kuramoto–Sakaguchi oscillators (4) with delay on human connectome data [200].
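Both measures can be computed in a few lines from a sampled time series of the order-parameter magnitudes. The sketch below assumes the common convention \(\lambda = \langle\operatorname{Var}_{T} R_{\sigma}\rangle_{\sigma}\) and \(\chi = \langle\operatorname{Var}_{\sigma} R_{\sigma}\rangle_{T}\); normalization conventions vary across papers.

```python
import numpy as np

def metastability_indices(R):
    """Shanahan-style indices from R[sigma, t], the synchrony R_sigma(t) of each
    of M populations sampled at equally spaced times.

    lam: variance over time of each population's synchrony, averaged over populations
    chi: variance across populations at each sample time, averaged over time
    """
    lam = np.var(R, axis=1).mean()   # <Var_T(R_sigma)>_sigma, "metastability"
    chi = np.var(R, axis=0).mean()   # <Var_sigma(R_sigma)>_T, "chimera-likeness"
    return lam, chi
```

As a sanity check, a fully stationary pattern gives \(\lambda=\chi=0\), while two populations frozen at different synchrony levels give \(\lambda=0\) and \(\chi>0\).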
Populations with distinct intrinsic frequencies
Mean-field reductions have also been successful at describing networks of nonidentical populations with distinct mean intrinsic frequencies. Examples of such a setup include interacting neuron populations in the brain with distinct characteristic rhythms. Resonances between the mean intrinsic frequencies give rise to higher-order interactions. Subject to certain conditions, one can apply the Ott–Antonsen reduction for the mean-field limit [201] or the Watanabe–Strogatz reduction for finite networks [202] to understand the collective dynamics. If resonances between the mean intrinsic frequencies of the populations are absent [203], then the mean-field limit equations (OA)—a system with 2M real dimensions—simplify even further. More specifically, assume that the intrinsic frequencies are distributed according to a Lorentzian distribution with width \(\Delta_{\sigma}\) and write \(Z_{\sigma}= R_{\sigma}e^{i\phi_{\sigma}}\) for the Kuramoto order parameter as above. As outlined in [203], nonresonant interactions imply that—as in (28a)—the equations for \(R_{\sigma}\) in (OA) decouple from the dynamics of the mean phases \(\phi_{\sigma}\). That is, the macroscopic dynamics are described by the M-dimensional system of equations
where \(a_{\sigma}, b_{\sigma\tau}, c_{\sigma\tau}\in \mathbb {R}\) are parameters which depend on the underlying nonlinear oscillator system. Note that these equations of motion are similar to Lotka–Volterra type dynamical systems which have been used to model sequential dynamics in neuroscience [192, 193]. Indeed, (31) gives rise to a range of dynamical behavior, including sequential localized synchronization and desynchronization through cycles of heteroclinic trajectories as well as chaotic dynamics [203].
Networks of neuronal oscillators
Neurons can be modeled at different levels of realism and complexity [204]. The approach we (and many others) take is to ignore the spatial extent of individual neurons (including dendrites, soma, and axons) and treat each neuron as a single point whose state is described by a small number of variables such as intracellular voltage and the concentrations of certain ions. We also ignore stochastic effects and describe the dynamics of single neurons by a small number of ordinary differential equations. By definition, the state of a Theta neuron or a QIF neuron is described by a phase variable. However, under the assumption of weak coupling, higher-dimensional models with a stable limit cycle (e.g., Hodgkin–Huxley, FitzHugh–Nagumo) can be reduced to a phase description using phase reduction [43, 44].
The two main types of coupling between neurons are through synapses or gap junctions. In synaptic coupling, the firing of a presynaptic neuron causes a change in the membrane conductance of the postsynaptic neuron, mediated by the release of neurotransmitters. This has the effect of causing a current to flow into the postsynaptic neuron, the current being of the form
\[ I = \mathrm {g}(t) \bigl(V^{\text{rev}} - V\bigr), \tag{32} \]
where \(V^{\text{rev}}\) is the reversal potential for that synapse, V is the voltage of the postsynaptic neuron, and \(\mathrm {g}(t)\) is the time-dependent conductance. The sign of \(V^{\text{rev}}\) relative to the resting potential of the postsynaptic neuron governs whether the synapse is excitatory or inhibitory. The function \(\mathrm {g}(t)\) may be stereotypical, i.e., it may have the same functional form for each firing of the presynaptic neuron, where t is measured from the last firing, or it may have its own dynamics. One approximation in this type of modeling is to ignore the value of V in (32) and just assume that the firing of a presynaptic neuron causes a pulse of current to be injected into the postsynaptic neuron(s).
In gap junctional coupling a current flows that is proportional to voltage differences, so if neurons k and j have voltages \(V_{k}\) and \(V_{j}\), respectively, and g is the (constant) gap junction conductance, the current flowing from neuron k to neuron j is \(I=\mathrm {g}(V_{k}-V_{j})\).
Populations of Theta neurons
In this section, we consider a population of Theta neurons (7) where the network interactions are generated by the input from all other neurons in the network. For input through synapses, for example, each neuron receives signals from the rest of the network through the input current I. Here, we will focus on the Ott–Antonsen reduction for Theta neurons (19) in the mean-field limit, assuming that variations in excitability are distributed according to a Lorentzian. The key ingredient here is to write the network input in terms of the mean-field variables to obtain a closed system of mean-field equations; as we will see below, this is possible for a range of couplings that are relevant for neural dynamics. For now, we focus on one population and omit the population index σ.
In the following, we consider a network where each neuron emits a pulse-like signal of the form
\[ P_{n}(\theta) = a_{n} (1-\cos\theta)^{n} \tag{33} \]
as it fires (the phase θ increases through π, see Figs. 5 and 2). The parameter \(n\in \mathbb {N}\) determines the sharpness of a pulse and \(a_{n} = 2^{n} (n!)^{2}/(2n)!\) is the normalization constant such that \(\int_{0}^{2\pi }P_{n}(\theta)\,\textrm {d}\theta=2\pi\); cf. Fig. 5. The average output of all neurons in the network, each one contributing identically, is
\[ P^{(n)} = \frac{1}{N} \sum_{j=1}^{N} P_{n}(\theta_{j}). \tag{34} \]
Now \(P^{(n)}\) can be expressed as a function of the order parameter Z: As shown in [93, 205, 206] we have, for the mean-field limit of infinitely many neurons, \(N\to \infty\),
with coefficients
Here \(\delta_{p,q}=1\) if \(p=q\) and \(\delta_{p,q}=0\) otherwise. In the limit of infinitely narrow pulses, \(n\rightarrow\infty\), we find
\[ P^{(\infty)} = \frac{1- \vert Z \vert ^{2}}{ \vert 1+Z \vert ^{2}}. \tag{37} \]
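The pulse normalization is easy to verify numerically. The snippet below assumes the commonly used pulse shape \(P_{n}(\theta)=a_{n}(1-\cos\theta)^{n}\), which peaks at the firing phase \(\theta=\pi\), and checks that \(\int_{0}^{2\pi}P_{n}\,\mathrm{d}\theta = 2\pi\) for several values of n.

```python
import math
import numpy as np

def a_n(n):
    """Normalization constant a_n = 2^n (n!)^2 / (2n)!."""
    return 2**n * math.factorial(n)**2 / math.factorial(2*n)

def P_pulse(theta, n):
    """Pulse P_n(theta) = a_n (1 - cos theta)^n, peaked at the firing phase pi."""
    return a_n(n) * (1 - np.cos(theta))**n

# check int_0^{2pi} P_n dtheta = 2 pi for a few pulse widths; a left Riemann sum
# on a periodic integrand over a full period is spectrally accurate
theta = np.linspace(0, 2*np.pi, 20001)
dtheta = theta[1] - theta[0]
integrals = {n: P_pulse(theta[:-1], n).sum() * dtheta for n in (1, 2, 5, 10)}
```

Increasing n concentrates the pulse around \(\theta=\pi\) while the normalization keeps the total area fixed, which is what makes the \(n\to\infty\) limit a delta-like spike.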
Synaptic coupling
If each Theta neuron (7) receives instantaneous synaptic input in the form of current pulses as in [93, 94, 205], the input current to each neuron is the network output
\[ I = \kappa P^{(n)}. \tag{38} \]
A positive coupling strength \(\kappa>0\) for the Theta neuron (7) corresponds to excitatory coupling and \(\kappa<0\) to inhibitory coupling. Note that since I is now a function of the Kuramoto order parameter by (35), we have closed the Ott–Antonsen equation for the Theta neuron (19) to obtain a system describing the dynamics for infinitely many oscillators.
Example 2
The challenge in Problem 2 was to classify what dynamics are possible in a single population of globally coupled Theta neurons with pulsatile coupling and to specify the onset where neurons start to fire. The dynamical repertoire of a population of Theta neurons can be understood using the Ott–Antonsen equations for the limit \(N\rightarrow\infty\). We follow the work by Luke et al. [93] who considered a network with pulsatile coupling (33) with nontrivial width \(n=2\) and direct synaptic coupling. According to (35) the pulse shape evaluates to
\[ P^{(2)} = 1 - \tfrac{4}{3}\operatorname {Re}(Z) + \tfrac{1}{3}\operatorname {Re}\bigl(Z^{2}\bigr) \]
as a function of Z. With direct synaptic coupling \(I=\kappa P^{(n)}\), the Ott–Antonsen equation (19) for an infinitely large population is thus given by
\[ \dot{Z} = \tfrac{1}{2} \bigl[ \bigl(i\bigl(\hat{\eta}+\kappa P^{(2)}\bigr) - \Delta\bigr) (1+Z)^{2} - i (1-Z)^{2} \bigr]. \]
This closed, twodimensional set of equations can readily be analyzed using dynamical systems methods.
Different dynamical behaviors may be observed for (19) when varying up to three parameters: the coupling strength κ, the excitability threshold η̂, and the width of their distribution Δ. Luke et al. [93] found three distinct stable dynamical regimes: (i) partially synchronous rest, (ii) partially synchronous spiking, and (iii) collective periodic wave dynamics. In partially synchronous rest, most neurons remain quiescent (a stable node in the two-dimensional Ott–Antonsen equations (19) for Z); in the partially synchronous spiking regime most neurons spike continuously (a stable focus for Z); and in the collective periodic wave regime neurons fire periodically (a stable periodic orbit of the order parameter Z). Varying κ from small to large values, we typically observe a transition from partially synchronous rest (quiescence) to partially synchronous spiking, with growing synchrony as κ increases. This transition is characterized by hysteresis arising around two fold bifurcations, originating in a cusp bifurcation. For certain parameter values, the order parameter may undergo a Hopf bifurcation from partially synchronous spiking to collective periodic wave dynamics so that \(Z(t)\) becomes oscillatory.
In a neuroscientific context, knowledge of macroscopic quantities other than the order parameter Z is sometimes preferable, such as the firing rate given via (21) as \(r=\frac{1}{\pi} \operatorname {Re}((1-\bar{Z})/(1+\bar{Z}) )\). Alternatively, as outlined in Sect. 3.1.3, the macroscopic equation (23) for \(W(t)\) is equivalent to (19) via a conformal transformation and describes the evolution of the population’s firing rate \(r=\frac{1}{\pi} \operatorname {Re}(W )\) and average voltage \(V=\operatorname {Im}(W )\). The algebraic solution for stationary states is particularly simple if one chooses an infinitely narrow pulse shape (\(n\rightarrow\infty\)); however, note that this choice may be biophysically less realistic [93] and leads to more degenerate dynamical behavior, e.g., bifurcations giving rise to oscillations in the order parameter (firing rate) disappear in this particular network with \(M=1\) population.
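The closed two-dimensional system can be integrated directly. The sketch below assumes the Ott–Antonsen equation for Theta neurons in the form \(\dot Z = \frac12[(i(\hat\eta + I) - \Delta)(1+Z)^2 - i(1-Z)^2]\) with \(I = \kappa P^{(2)}\) and \(P^{(2)}(Z) = 1 - \frac43\operatorname{Re}Z + \frac13\operatorname{Re}Z^2\) (obtained by expanding \(a_2(1-\cos\theta)^2\) in harmonics); the parameter values are illustrative, not taken from [93].

```python
import numpy as np

def P2(Z):
    """Pulse output P^(2) as a function of the order parameter Z (n = 2)."""
    return 1.0 - (4.0/3.0)*Z.real + (1.0/3.0)*(Z**2).real

def oa_rhs(Z, eta_hat, delta, kappa):
    """Assumed OA equation for Theta neurons with direct synaptic coupling."""
    I = kappa * P2(Z)
    return 0.5*((1j*(eta_hat + I) - delta)*(1 + Z)**2 - 1j*(1 - Z)**2)

eta_hat, delta, kappa = -0.5, 0.1, 1.0        # illustrative parameter values
Z, dt = 0.1 + 0.0j, 0.005
traj = []
for _ in range(int(100/dt)):                  # classical RK4, fixed step
    k1 = oa_rhs(Z, eta_hat, delta, kappa)
    k2 = oa_rhs(Z + 0.5*dt*k1, eta_hat, delta, kappa)
    k3 = oa_rhs(Z + 0.5*dt*k2, eta_hat, delta, kappa)
    k4 = oa_rhs(Z + dt*k3, eta_hat, delta, kappa)
    Z = Z + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    traj.append(Z)
traj = np.array(traj)
# firing rate via (21); equals (1 - |Z|^2) / (pi |1 + Z|^2), hence nonnegative
rate = ((1 - np.conj(traj))/(1 + np.conj(traj))).real / np.pi
```

The unit disk \(|Z|\le1\) is invariant under this flow, so the trajectory and the firing rate stay bounded; identifying node, focus, and periodic-orbit attractors of this planar system recovers the three regimes described above.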
Finally, we note that if in addition the excitability of neurons varies periodically, more complicated dynamics and macroscopic chaos can be observed [205]. While this example covers networks of Theta neurons, the same approach applies to networks with QIF neurons with direct synaptic coupling as given by (38); see, for example, the analyses in [207–209].
A simple modification of (7) is to add synaptic dynamics by letting the input current I satisfy the equation
\[ \tau _{\mathrm {syn}} \dot{I} = -I + \kappa P^{(n)}, \tag{39} \]
where \(\tau _{\mathrm {syn}}\) is the time constant governing the synaptic dynamics. In the limit \(\tau _{\mathrm {syn}}\to0\) the synaptic dynamics are instantaneous and we recover the previous model. Again, with (35) the Ott–Antonsen equations (19) and (39) form a closed system of equations that describes the dynamics in the mean-field limit.
Gap junctions
Along with synaptic coupling, the other major form of coupling between neurons is via gap junctions [210], in which a current flows between connected neurons proportional to the difference in their voltages. Using the equivalence of the Theta and QIF neuron, it was shown in [211] that adding all-to-all gap junction coupling to (7) results in the equations
where \(\kappa^{\text{GJ}}\) is the strength of gap junction coupling and the function \(\mathrm {tn}(\theta):=\sin{\theta}/ (1+\cos{\theta }+\epsilon)\) with \(0<\epsilon\ll1\) stems from the coordinate transformation between Theta and QIF neurons. Note that (40) is still a sinusoidally coupled system. Assuming a Lorentzian distribution of excitability \(\eta_{k}\) centered at η̂ with width Δ, the dynamics in the limit of infinitely many oscillators are given by the Ott–Antonsen equation,
where
and \(\rho=\sqrt{2\epsilon+\epsilon^{2}}-1-\epsilon\). Note that the input current is still to be defined: There could be gap-junction-only coupling, \(I=0\), instantaneous synaptic input (38), or synaptic dynamics (39) as defined above.
The reduced equations allow one, for example, to study what effect the strength of the gap junction coupling has on the dynamics. Laing [211] found that for excitatory synaptic coupling (i.e., \(\kappa >0\)) increasing the strength of gap junction coupling could induce oscillations in the mean field via a Hopf bifurcation, and destroy previously existing bistability between steady states with high and low mean firing rates. For inhibitory synaptic coupling (i.e., \(\kappa<0\)) increasing the strength of gap junction coupling stabilized a steady state with high mean firing rate, inducing bistability in the network. In spatially extended systems, it was found that gap junction coupling could destabilize “bump” states via a Hopf bifurcation, and create traveling waves of activity.
Note that in recent work [212] the authors showed that one can take the limit \(\epsilon\rightarrow0\) in the above derivation, thus simplifying the analysis and allowing one to treat synaptic and gap junctional coupling (in an infinite network of QIF neurons) on equal footing.
Conductance dynamics
The above models for Theta neurons have all assumed that synaptic coupling is via the injection of current pulses. However, Ref. [60] considers a model in which synaptic input was in the form of a current, equal to the product of a conductance and the difference between the voltage of a QIF neuron and a reversal potential \(V^{\text{rev}}\). Converting to Theta neuron variables, a particular case of their model can be written as
with a timedependent gating function
that depends on the network output modulated by the coupling strength \(\kappa^{\mathrm {g}}>0\). (Note that quantities like g and \(V^{\text{rev}}\) have been nondimensionalized by scaling relative to dimensional quantities.) The corresponding Ott–Antonsen equations read
which is closed since \(\mathrm {g}(t)\) is a function of Z by (37).
The dynamics of this network are straightforward and as expected: For inhibitory coupling (\(V^{\text{rev}}<0\)) there is one stable fixed point for all η̂, while for excitatory coupling (\(V^{\text{rev}}>0\)) there can be a range of negative η̂ values for which the network is bistable between steady states with high and low average firing rates. This bistability in an excitatorily self-coupled network is of interest as such a network can be thought of as a one-bit “memory”, stably storing one of two states.
Populations of Winfree oscillators
The state of a Winfree oscillator [79] is also described by a single angular variable. The Winfree model predates the Kuramoto model and mimics the behavior of biological systems such as flashing fireflies or circadian rhythms in Drosophila [213]. In general, the Winfree model does not exhibit sinusoidal coupling. But under suitable assumptions, a network of Winfree oscillators is amenable to simplification through the Ott–Antonsen reduction [214]. Consider a network of N Winfree phase oscillators which evolve according to
for \(k=1, \dotsc, N\) and 2π-periodic functions Q and P̂. The function Q is the phase response curve of an oscillator, which can be measured experimentally or determined from a model neuron [215]. If we set \(Q(\theta)=\sin{\beta }-\sin{(\theta+\beta)}\) with parameter β, then we have a sinusoidally coupled phase oscillator network. Moreover, suppose that the network interaction is given by a pulsatile function \(\hat{P}(\theta )=P_{n}(\theta-\pi)\). While P̂ has its maximum at \(\theta=0\) (unlike the interactions for the Theta neuron), it can be expanded in a similar way as (35) into powers of the Kuramoto order parameter. Assuming that the intrinsic frequencies are distributed as a Lorentzian, we obtain an Ott–Antonsen equation that describes the dynamics in the limit of infinitely large networks; see [214] for details.
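A finite-N version of such a network is straightforward to simulate. The sketch below assumes the standard Winfree form \(\dot\theta_k = \omega_k + Q(\theta_k)\,\frac{\epsilon}{N}\sum_{j} \hat P(\theta_j)\), where ε is a coupling constant we introduce for the sketch, together with \(Q(\theta)=\sin\beta - \sin(\theta+\beta)\) and \(\hat P(\theta)=P_n(\theta-\pi)=a_n(1+\cos\theta)^n\); all numerical values are illustrative.

```python
import math
import numpy as np

def winfree_rhs(theta, omega, eps, beta, n):
    """Assumed Winfree network: dtheta_k/dt = omega_k + Q(theta_k) * mean pulse input,
    with Q(theta) = sin(beta) - sin(theta + beta), Phat(theta) = a_n (1 + cos theta)^n."""
    a_n = 2**n * math.factorial(n)**2 / math.factorial(2*n)
    mean_pulse = (eps/theta.size) * np.sum(a_n * (1 + np.cos(theta))**n)
    return omega + (np.sin(beta) - np.sin(theta + beta)) * mean_pulse

rng = np.random.default_rng(1)
N, n, beta, eps = 100, 2, 0.5, 0.3                 # illustrative values
omega = 1.0 + 0.05*rng.standard_normal(N)          # nearly identical frequencies
theta = rng.uniform(0, 2*np.pi, N)
dt = 0.01
for _ in range(int(200/dt)):                       # classical RK4, fixed step
    k1 = winfree_rhs(theta, omega, eps, beta, n)
    k2 = winfree_rhs(theta + 0.5*dt*k1, omega, eps, beta, n)
    k3 = winfree_rhs(theta + 0.5*dt*k2, omega, eps, beta, n)
    k4 = winfree_rhs(theta + dt*k3, omega, eps, beta, n)
    theta = theta + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
R = abs(np.mean(np.exp(1j*theta)))                 # Kuramoto order parameter at T = 200
```

Monitoring R over time distinguishes the asynchronous state (constant mean field) from the synchronous one (oscillating mean field); varying n changes the pulse sharpness and hence the synchronizability, as studied in [214].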
Several groups have used this description to study the dynamics of infinite networks of Winfree oscillators. Pazó and Montbrió [214] found that such a network typically has either an asynchronous state (constant mean field) or a synchronous state (periodic oscillations in the mean field, indicating partial synchrony within the network) as attractors. They also found that varying n (the sharpness of \(P_{n}\)) had a significant effect on the synchronizability of the network. Laing [206] studied a spatially extended network of Winfree oscillators and found a variety of stationary, traveling, and chaotic spatiotemporal patterns. Finally, Gallego et al. [216] extended the work in [214], considering a variety of types of pulsatile functions and phase response curves.
Coupled populations of neurons
While the previous sections discussed a network consisting of a single population of all-to-all coupled model neurons, an obvious generalization is to consider networks of two or more populations. Consider M populations of Theta neurons and let \(P^{(n)}_{\tau}\) denote the output of population τ. For example, for synaptic interactions amongst populations, (38) generalizes to
\[ I_{\sigma} = \sum_{\tau=1}^{M} \kappa_{\sigma\tau} P^{(n)}_{\tau}, \]
where \(\kappa_{\sigma\tau}\) is the input strength from population τ to population σ. Writing each \(P^{(n)}_{\tau}\) in terms of the order parameter \(Z_{\tau}\) of population τ, we obtain a closed set of M Ott–Antonsen equations (19) that describe the dynamics for infinitely large populations.
Interacting populations of neural oscillators give rise to neural rhythms. Laing [206] considered a network of two coupled populations of Theta neurons, one inhibitory and one excitatory. Such networks support a periodic PING rhythm [56] in which the activity of both populations is periodic, with the peak activity of the excitatory population preceding that of the inhibitory one. Analyses of similar types of networks were performed in [52, 60, 102]. Periodic behavior of the mean-field equations of coupled populations of Theta neurons (or equivalently QIF neurons) allows one to extract macroscopic phase response curves [217] and thus to treat such ensembles as single oscillatory units in weakly coupled networks.
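An excitatory–inhibitory motif of this kind can be sketched by coupling two copies of the single-population Ott–Antonsen equation for Theta neurons (in the form \(\dot Z_\sigma = \frac12[(i(\hat\eta_\sigma + I_\sigma) - \Delta_\sigma)(1+Z_\sigma)^2 - i(1-Z_\sigma)^2]\), an assumption of this sketch) through \(I_\sigma = \sum_\tau \kappa_{\sigma\tau} P^{(2)}_\tau\). All parameter values are illustrative; whether a PING-like rhythm actually emerges depends on the parameters.

```python
import numpy as np

def P2(Z):
    """Population output for pulses of width n = 2, as a function of Z."""
    return 1.0 - (4.0/3.0)*Z.real + (1.0/3.0)*(Z**2).real

def rhs(Z, eta_hat, delta, kappa):
    """Assumed per-population OA equation with input I_s = sum_t kappa[s, t] P2(Z_t)."""
    I = kappa @ P2(Z)
    return 0.5*((1j*(eta_hat + I) - delta)*(1 + Z)**2 - 1j*(1 - Z)**2)

# population 0 excitatory, population 1 inhibitory (illustrative values)
kappa = np.array([[0.0, -2.0],     # E receives inhibition from I
                  [3.0,  0.0]])    # I receives excitation from E
eta_hat = np.array([0.5, -0.5])
delta = np.array([0.1, 0.1])

Z, dt = np.array([0.1 + 0.0j, 0.1 + 0.0j]), 0.005
traj = []
for _ in range(int(100/dt)):       # classical RK4, fixed step
    k1 = rhs(Z, eta_hat, delta, kappa)
    k2 = rhs(Z + 0.5*dt*k1, eta_hat, delta, kappa)
    k3 = rhs(Z + 0.5*dt*k2, eta_hat, delta, kappa)
    k4 = rhs(Z + dt*k3, eta_hat, delta, kappa)
    Z = Z + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    traj.append(Z.copy())
traj = np.array(traj)
```

Inspecting the two order-parameter time series (or the corresponding firing rates via (21)) reveals whether the E and I populations settle to steady states or to an oscillation with the excitatory peak leading the inhibitory one.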
Coupled populations of Winfree oscillators support a range of dynamics. In Ref. [214] the authors considered a symmetric pair of networks of Winfree oscillators. They observed a variety of dynamics, such as a quasiperiodic chimera state in which one population is perfectly synchronous while the order parameter of the other undergoes quasiperiodic oscillations. They also found a chaotic chimera state where one population is phase synchronized while the order parameter of the other one fluctuates chaotically.
Further generalizations
The oscillator populations considered above do not have any sense of space themselves, apart from possibly two networks being at different points in space. The brain is three-dimensional, although the presence of layered structures could lend itself to a description in terms of a series of coupled two-dimensional domains. Regardless, the spatial aspects of neural dynamics should not be ignored. Several authors have generalized the techniques discussed above to spatial domains, deriving neural field models: spatiotemporal evolution equations for macroscopic quantities [94, 206, 211, 218–221]. The main advantage of using this new generation of neural field models is that, unlike classical models [58, 59], the derivations from networks of Theta neurons are exact rather than heuristic. Rather than considering neural field models on continuous spatial domains, one could consider them on a discretized network where each node is a brain region and coupling strengths are given, for example, by connectome data. We will briefly touch upon these approaches in Sect. 5 below.
All of the networks above have been all-to-all coupled, which is rarely the case in real-world systems. The in-degree of a neuron is the number of neurons connecting to it, whereas the out-degree is the number of neurons to which it connects. For all-to-all coupled networks all neurons have the same in- and out-degree (\(N-1\) for a network of N neurons with no self-coupling). Several groups have considered networks in which the degrees are distributed, for example following a power-law distribution [172, 222–224]. The mean-field reduction techniques discussed above can be used to accurately and efficiently investigate the influence of this aspect of network structure on dynamics, and this is of great interest.
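As a concrete illustration of these degree notions (a toy example, not from the original text), take a binary adjacency matrix with \(A_{ij}=1\) when neuron j connects to neuron i: in-degrees are then row sums and out-degrees are column sums, and for all-to-all coupling without self-connections both equal \(N-1\).

```python
# Toy example: in- and out-degrees from a binary adjacency matrix A,
# with A[i][j] = 1 when neuron j connects to neuron i.
N = 4
# All-to-all coupling without self-connections.
A = [[0 if i == j else 1 for j in range(N)] for i in range(N)]

in_degree = [sum(row) for row in A]                              # row sums
out_degree = [sum(A[i][j] for i in range(N)) for j in range(N)]  # column sums

print(in_degree, out_degree)  # both [3, 3, 3, 3], i.e., N - 1 = 3 everywhere
```

For a heterogeneous network one would replace A by a random matrix whose row and column sums follow the desired degree distribution.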
Networks of identical oscillators (whether finite or infinite) are described by the Watanabe–Strogatz equations. While the application to Kuramoto-type oscillator networks is fairly standard, the corresponding mean-field equations for Theta neurons (27a)–(27c) have only recently been analyzed.
Applications to neural modeling
The mean-field reductions and their applications to populations of neural units—as next-generation neural mass models—can give new modeling approaches to understand the dynamics of large-scale neural networks. In the previous section, we took a descriptive dynamical systems perspective to understand the asymptotic dynamics and their bifurcations. We now change the perspective to elucidate how the mean-field reductions can give new insights into neural network dynamics.
Dynamics of neural circuits and populations
Example 3
How does a heterogeneous network of all-to-all coupled QIF neurons react to a transient stimulus (see Problem 3 above)? To answer this question using exact mean-field reductions, we analyze a situation similar to that studied by Montbrió et al. [102]: Consider a network of QIF neurons (9) with dynamics governed by
for \(k=1,\dotsc, N\) with the rule that when the voltage reaches \(V_{k}=\infty\), it is reset to \(V_{k}=-\infty\). The \(I_{k}\) are chosen from a Lorentzian distribution with mean η̂ and width parameter Δ, and neurons are coupled all-to-all with coupling strength κ. The mean firing rate is given by
representing the average neural activity in the past; here \(t_{j}^{\ell}\) is the ℓth firing time of the jth neuron, and ℓ is summed only over past firing times, \(t_{j}^{\ell}< t\). The input current \(s(t)\) will be specified below. Letting \(N\rightarrow\infty\), the network is described by (23), which in the present case becomes
where \(r=\operatorname {Re}(W )/\pi\). Suppose we set \(\Delta=0.1\), \(\hat {\eta }=-0.5\) and \(\kappa=5\). Having \(\hat {\eta }<0\) means that in the absence of coupling most neurons will be excitable rather than firing, and \(\kappa>0\) models excitatory coupling.
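The excitability claim can be made concrete with a short side computation (ours, not from the original text). An uncoupled QIF neuron \(\dot{V} = V^{2}+\eta_{k}\) is excitable precisely when \(\eta_{k}<0\), and with \(\hat{\eta}<0\) as stated in the text (taking \(\hat{\eta}=-0.5\), \(\Delta=0.1\)) the Lorentzian distribution puts most of its mass there:

```latex
% Uncoupled QIF neuron \dot V = V^2 + \eta_k:
%   \eta_k < 0  ->  fixed points V_\pm = \pm\sqrt{-\eta_k} (excitable),
%   \eta_k > 0  ->  periodic firing with rate \sqrt{\eta_k}/\pi.
% Lorentzian CDF with mean \hat\eta = -0.5 and width \Delta = 0.1,
% evaluated at zero:
\Pr(\eta_k < 0)
  = \frac{1}{2} + \frac{1}{\pi}\arctan\!\left(\frac{0-\hat\eta}{\Delta}\right)
  = \frac{1}{2} + \frac{\arctan 5}{\pi} \approx 0.94,
% i.e., roughly 94% of the uncoupled neurons are excitable rather than firing.
```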
We set the transient input to be \(s(t)=0.3\) for \(50< t<150\) and \(s(t)=0\) otherwise. The mean firing rate and voltage (i.e., averages over the ensemble of all neurons) of the network are shown in Fig. 6 (top and bottom, respectively) for both the network (48) and the mean-field description (49). For these parameters the network is bistable: After the input current is removed, the network settles into an active state rather than returning to the quiescent state that it was in before stimulation. The agreement between the two descriptions is excellent, but the mean-field description is obviously much easier to numerically integrate and is also amenable to bifurcation analysis, as for example shown in [102].
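The bistable response described above can be reproduced in a few lines. Equations (49) are not reproduced here; the following minimal sketch (our forward-Euler integration, not the authors' code) uses the firing-rate form of the mean-field equations derived in [102], which in this example's notation reads \(\dot{r} = \Delta/\pi + 2rv\), \(\dot{v} = v^{2} + \hat{\eta} + \kappa r + s(t) - \pi^{2} r^{2}\); we take \(\hat{\eta}=-0.5\) (the text requires \(\hat{\eta}<0\)) and start near the quiescent state.

```python
# Hedged sketch of the mean-field dynamics for Example 3 [102]:
#   dr/dt = Delta/pi + 2 r v,
#   dv/dt = v^2 + eta + kappa r + s(t) - pi^2 r^2,
# with Delta = 0.1, eta = -0.5, kappa = 5 and a transient input
# s(t) = 0.3 for 50 < t < 150, illustrating the bistability in the text.
import math

Delta, eta, kappa = 0.1, -0.5, 5.0

def s(t):
    return 0.3 if 50.0 < t < 150.0 else 0.0

def step(r, v, t, dt):
    dr = Delta / math.pi + 2.0 * r * v
    dv = v * v + eta + kappa * r + s(t) - (math.pi * r) ** 2
    return r + dt * dr, v + dt * dv

# Initial condition approximating the quiescent state.
r, v, t, dt = 0.026, -0.615, 0.0, 1e-3
r_before = None
while t < 300.0:
    r, v = step(r, v, t, dt)
    t += dt
    if r_before is None and t >= 40.0:
        r_before = r        # quiescent firing rate just before the stimulus

print(round(r_before, 2), round(r, 2))  # low before the pulse, high afterwards
```

After the pulse ends at \(t=150\) the firing rate does not return to its pre-stimulus value but settles on the coexisting active branch, mirroring the behavior shown in Fig. 6.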
The influence of oscillatory drive on network dynamics related to cognitive processing in simple working memory and memory recall tasks was studied by Schmidt et al. [225] in coupled populations of inhibitory and excitatory QIF neurons. The authors use the exact mean-field reductions reviewed here to elucidate how the frequency of an oscillatory input interacts with the intrinsic dynamics of recurrently coupled spiking networks to change memory states. They find that slow delta and theta band oscillations are effective in activating network states associated with memory recall, while faster beta oscillations can serve to clear memory states via resonance mechanisms.
Balanced sparse networks of inhibitory QIF neurons were studied by di Volo and Torcini [226] to explain the onset of self-sustained collective oscillations via reduction to mean-field dynamics. This is achieved by applying the mean-field reductions to sparse networks with diverging coupling strength, an approximation that works surprisingly well, as their bifurcation diagrams for the onset of collective oscillations show. The application of the mean-field reductions to sparse networks is ad hoc, and further mathematical insights to define a well-defined limit would be desirable.
The work by Dumont and Gutkin [227] used exact phase reductions to identify the biophysical neuronal and synaptic properties that are responsible for macroscopic dynamics, such as the interneuronal gamma (ING) and the pyramidal-interneuronal gamma (PING) population rhythms. The key ingredient is the phase response curve of the oscillatory macroscopic behavior of two coupled populations of QIF neurons [217], one excitatory and one inhibitory, as mentioned above. Assuming weak coupling between two sets of two populations (i.e., four populations total), the authors extracted phase-locking patterns of the coupled multi-population model.
A number of other studies have employed mean-field reductions for populations of QIF neurons to elucidate how microscopic neural properties affect the macroscopic dynamics [228, 229]. This includes insights into networks of heterogeneous QIF neurons with time-delayed, all-to-all synaptic coupling [230, 231], or two such networks [232]. Moreover, the mean-field reductions are also useful to analyze spatially extended networks of both Theta and QIF neurons, where localized patterns—such as bump states—can occur; cf. [94, 220].
Large-scale neural dynamics
The theory above is particularly pertinent for the study of mesoscopic or macroscopic brain dynamics, i.e., dynamics arising from tissue that contains large populations of neurons. Such dynamics are recorded using a variety of different modalities in animal or human studies, including local field potentials (LFP) and magneto- or electroencephalographic (MEG/EEG) recordings [19]. These recording modalities pick up changes in dynamics that arise in conjunction with fluctuations in populations of neurons. Thus, when recordings are taken from multiple sensors in different positions simultaneously, one can map the spatiotemporal dynamics of large regions of the brain. The inclusion of multiple sensors yields a natural way to construct a large-scale network representation of the dynamics of the brain, in which sensors are nodes of the network. Alternatively, dynamics can be attributed to distributed regions of interest within the brain, for example using approaches to solve the inverse problem and thereby reconstruct a network in source space [233, 234].
Having defined nodes, to determine interactions [235] there are several ways to define the edges of large-scale brain networks; in a general context this inverse problem is known as network reconstruction [236]. Broadly speaking, edges of brain networks can be characterized as either functional, structural, or effective connections [19, 237]. For functional connectivity, a measure of statistical interrelation is used to quantify the extent to which the dynamics of nodes coevolve (see, for example, [238]), with edges linking pairs of nodes that are highly correlated being assigned large weights. Structural connectivity, on the other hand, describes a means to define edges on anatomical grounds, for example via tracing of axonal tracts [239]. Finally, edges in effective connectivity networks are defined as connection strengths in explicit dynamic models that are tuned such that dynamic recordings are well explained by the model [240].
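As a schematic illustration of the functional-connectivity construction (synthetic signals standing in for sensor recordings, not a real analysis pipeline), pairwise Pearson correlation can serve as the statistical interrelation measure, with strongly correlated node pairs receiving large edge weights:

```python
# Hedged sketch of functional-connectivity edges: Pearson correlation
# between node time series. The three signals are synthetic stand-ins
# for sensor recordings, not real LFP/EEG data.
import math
import random

random.seed(1)
T = 2000
common = [random.gauss(0, 1) for _ in range(T)]
x = [c + 0.1 * random.gauss(0, 1) for c in common]  # node 1: follows the common drive
y = [c + 0.1 * random.gauss(0, 1) for c in common]  # node 2: follows the same drive
z = [random.gauss(0, 1) for _ in range(T)]          # node 3: independent

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

# Edge weights of the 3-node functional network: the co-driven pair gets a
# strong edge, the independent node only a weak one.
w_xy, w_xz = pearson(x, y), pearson(x, z)
print(w_xy > 0.9, abs(w_xz) < 0.2)  # True True
```

In practice the interrelation measure, any thresholding, and corrections for volume conduction or common sources are substantive methodological choices; see [238] for a discussion.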
These different ways of representing the brain in terms of networks yield several avenues for investigation that are relevant to the discussion above. Specifically, network analyses have provided insight into the mechanisms of both function and dysfunction [18, 31, 241, 242], and modeling frameworks such as those described above are required in order to explain findings and develop testable predictions [243]. A particularly pertinent challenge is to understand to what extent structural connectivity—the structural property of the network—shapes emergent functional connectivity—properties of the dynamics—in both healthy and disease conditions [16, 17, 19, 244–247].
Functional connectivity has been shown to be altered in myriad disorders of the brain, including epilepsy and Alzheimer’s disease [241, 248–251]. It is therefore becoming an important marker for brain disorders, as well as a potentially important means of understanding disease and designing therapy [18, 21]. However, in order to link different data modalities and to develop effective and efficient treatment, it is crucial to understand why specific changes in dynamics occur. The reduction methods described herein could help in this direction by linking fundamental properties of neurons to emergent properties of neuronal networks, which can then be coupled to build an understanding of mesoscopic or whole-brain dynamics [3].
We conclude this section with two very recent examples of how the mean-field reductions reviewed here have been used to understand the dynamics of macroscopic brain activity from experimental data. First, Weerasinghe et al. [252] employed the Kuramoto model and its mean-field reduction to develop new closed-loop approaches for deep brain stimulation to improve the treatment of patients with essential tremor and Parkinson’s disease. Specifically, the Ott–Antonsen equations yield expressions for the mean-field response of an oscillator population, which can be compared with experimentally measured response curves obtained from patients [253]. The idea is that such a model-supported approach eventually yields efficient treatment strategies, for example, by stimulating at the optimal phase and amplitude to maximize efficacy and minimize side effects. Second, Byrne et al. [254] recently developed a novel brain model based on coupled populations of QIF neurons and used it in a number of neurobiological contexts, such as providing an understanding of the changes in power spectra observed in EEG/MEG neuroimaging studies of motor cortex during movement. Such a model is the first step to bridge the microscopic properties of individual neurons to macroscopic brain dynamics.
Conclusions and open problems
The mean-field descriptions presented in this review are able to bridge spatial scales in coupled oscillator networks since they provide explicit descriptions of the macroscopic dynamics in terms of microscopic quantities. This provides insights into how network coupling properties (for example, a neural connectome) relate to dynamical properties (and thus functional properties) of an oscillator network. Importantly, the equations are not just a black box: tools from dynamical systems theory allow us to study explicitly how the dynamics change as network parameters are varied. We conclude by highlighting three sets of challenges for future research.
The first set of challenges relates to the reductions themselves and the mathematics behind them; some of these were already discussed in Sect. 3.3 and further along the way. Phase oscillator networks that arise through phase reduction typically have non-sinusoidal coupling due to higher harmonics and non-additive terms in the interactions. These can arise through strongly nonlinear oscillations or nonlinear interactions between oscillators; see [150, 151, 153] and other references above. Hence, the influence of such interactions on the mean-field reductions still needs to be clarified: While they could fail in certain instances [133], first results indicate that they may still provide useful information over some timescales [136]—further work in this direction is desirable. As an example, Thiem et al. [255] recently used manifold learning and a neural network to learn the Ott–Antonsen equations governing the Kuramoto model; these techniques are quite generally applicable. Real-world networks are often modeled as systems subject to noise. Here, we point to very recent results that extend the mean-field reductions presented here in these directions by using a “circular cumulants” approach [73–75].
The second set of challenges concerns the relationship between the mean-field reductions, the underlying microscopic models, and real-world data in the context of neuroscience. How do LFP or EEG measurements relate to the mean-field variables that constitute the reduced system equations? Connectivity can be estimated via neural imaging techniques, but how does this data relate to the coupling strength and phase-lag parameters that appear in the Ott–Antonsen equations of coupled Kuramoto–Sakaguchi populations? Or how does data relate to the coupling parameters of the microscopic models that are compatible with the reduction? These questions become even more intricate for coupled populations of Theta neurons; cf. [254].
The last set of challenges goes well beyond the mean-field reductions presented here. Mathematical tools are helpful to describe the dynamics, but how do the dynamics relate to functional aspects of the (neural) oscillator network? How do we identify dynamics that are pathological, and validate and use models of these dynamics to predict treatment responses? On the large scale, some pathologies such as epilepsy reveal salient abnormal dynamics [25], but alterations in other conditions are more subtle, and therefore model-driven analyses could prove very useful in the clinical context [20, 250–252]. Insights into these fundamental questions will allow one to make the mean-field reductions presented in this review even more useful to design targeted therapies for neural diseases.
Notes
 1.
 2.
 3.
Note, however, that the mesoscopic description in terms of collective variables of each subnetwork can have a “phase” and “amplitude” such as the mean phase and the amount of synchrony.
 4.
 5.
The order “parameter” Z is an observable which encodes the state of the system, and should not be confused with a system parameter.
 6.
If only one population is considered, \(M=1\), we simply write \(\alpha_{\sigma\tau}=\alpha\) and \(K_{\sigma\tau}=K\); this corresponds to the Kuramoto–Sakaguchi model. Furthermore, if \(\alpha=0\), we recover the Kuramoto model (2). Here, we regard the number of populations M to be fixed; to take a limit \(M\to\infty\) one should assume that the coupling strengths \(K_{\sigma\tau}\) scale appropriately.
 7.
 8.
One may also consider corotating frames with time-dependent frequencies. For example, for any given oscillator \((\tau, j)\) one can choose a corotating frame in which its phase \(\theta_{\tau, j}\) appears stationary via the transform \(\theta _{\sigma,k}\mapsto\theta_{\sigma,k}-\theta_{\tau,j}\). This transformation changes the structure of (4) but it does not affect the qualitative dynamics.
 9.
This convention is in line with the firing of the equivalent quadratic integrate-and-fire neuron introduced below.
 10.
 11.
Transport equations are common in physics. There they are also known as the continuity equation (or Liouville equation in classical statistical physics describing the ensemble evolution in time) and play the important role of describing conservation laws. To visualize, in the context of fluid dynamics, the density in (10) plays the role of a mass density and (10) then implies that the total mass in the system is a conserved quantity [262].
 12.
 13.
This problem can however be solved by respecting the symmetries underlying the system, i.e., either by integrating the WS equations or the dynamic equations governing the Möbius transformations, which in turn can be used to compute trajectories for individual oscillators.
 14.
 15.
The resulting frequency distribution is curiously similar to Norbert Wiener’s notion of the frequency distribution of brain waves around the alpha wave band; see, e.g., [265].
 16.
By symmetry there is a corresponding pattern in \(\textrm {S}\textrm {D}= \lbrace R_{1}=1, R_{2}<1 \rbrace\).
 17.
A heteroclinic trajectory between two distinct saddles is a solution that is attracted to one saddle as time increases and to the other saddle as time evolves backward.
 18.
Unfortunately, the term “network” has a double meaning here: on the one hand, we study oscillatory units which form networks through their (physical and functional) interactions; on the other hand, heteroclinic networks are abstract networks of dynamical states linked by heteroclinic trajectories which allow dynamical transitions.
Abbreviations
 EEG:

electroencephalography
 ING:

interneuronal gamma
 LFP:

local field potentials
 MEG:

magnetoencephalography
 PING:

pyramidalinterneuronal gamma
 QIF:

quadratic integrate-and-fire
 SNIC:

saddle-node bifurcation on an invariant circle
 SNIPER:

saddle-node-infinite-period
References
 1.
Winfree AT. The geometry of biological time. New York: Springer; 2001. (Interdisciplinary applied mathematics; vol. 12). https://doi.org/10.1007/9781475734843.
 2.
Strogatz SH. Nature. 2001;410(6825):268. https://doi.org/10.1038/35065725.
 3.
Breakspear M. Nat Neurosci. 2017;20(3):340. https://doi.org/10.1038/nn.4497.
 4.
Liu C, Weaver DR, Strogatz SH, Reppert SM. Cell. 1997;91(6):855. https://doi.org/10.1016/S00928674(00)804730.
 5.
Buck J, Buck E. Science. 1968;159(3821):1319. https://doi.org/10.1126/science.159.3821.1319.
 6.
Gilpin W, Bull MS, Prakash M. Nat Rev Phys. 2020. https://doi.org/10.1038/s4225401901290.
 7.
Collins JJ, Stewart I. J Nonlinear Sci. 1993;3(1):349. https://doi.org/10.1007/BF02429870.
 8.
Strogatz SH, Abrams DM, McRobie A, Eckhardt B, Ott E. Nature. 2005;438(7064):43. https://doi.org/10.1038/43843a.
 9.
Strogatz SH, Kronauer RE, Czeisler CA. Am J Physiol. 1987;253(1 Pt 2):R172. https://doi.org/10.1152/ajpregu.1987.253.1.R172.
 10.
Leloup JC, Goldbeter A. BioEssays. 2008;30(6):590. https://doi.org/10.1002/bies.20762.
 11.
Smolen P, Byrne J. Encyclopedia of neuroscience. 2009.
 12.
Zavala E, Wedgwood KC, Voliotis M, Tabak J, Spiga F, Lightman SL, TsanevaAtanasova K. Trends Endocrinol. Metab. 2019;30(4):244. https://doi.org/10.1016/j.tem.2019.01.008.
 13.
Ghosh AK, Chance B, Pye E. Arch Biochem Biophys. 1971;145(1):319. https://doi.org/10.1016/00039861(71)900427.
 14.
Danø S, Sørensen PG, Hynne F. Nature. 1999;402(6759):320. https://doi.org/10.1038/46329.
 15.
Massie TM, Blasius B, Weithoff G, et al.. Proc Natl Acad Sci USA. 2010;107(9):4236. https://doi.org/10.1073/pnas.0908725107.
 16.
Honey CJ, Kotter R, Breakspear M, Sporns O. Proc Natl Acad Sci USA. 2007. https://doi.org/10.1073/pnas.0701519104.
 17.
Honey CJ, Thivierge JP, Sporns O. NeuroImage. 2010;52(3):766. https://doi.org/10.1016/j.neuroimage.2010.01.071.
 18.
Fornito A, Zalesky A, Breakspear M. Nat Rev Neurosci. 2015;16(3):159. https://doi.org/10.1038/nrn3901.
 19.
Bassett DS, Sporns O. Nat Neurosci. 2017;20(3):353. https://doi.org/10.1038/nn.4502.
 20.
Kuhlmann L, Lehnertz K, Richardson MP, Schelter B, Zaveri HP. Nat Rev Neurol. 2018;14(10):618. https://doi.org/10.1038/s4158201800552.
 21.
Goodfellow M, Rummel C, Abela E, Richardson M, Schindler K, Terry J. Sci Rep. 2016;6:29215. https://doi.org/10.1038/srep29215.
 22.
Strogatz SH. Sync: the emerging science of spontaneous order. London: Penguin; 2004.
 23.
Glass L. Nature. 2001;410:277. https://doi.org/10.1038/35065745.
 24.
Dörfler F, Bullo F. Automatica. 2014;50(6):1539. https://doi.org/10.1016/j.automatica.2014.04.012.
 25.
Kahana MJ. J Neurosci. 2006;26(6):1669. https://doi.org/10.1523/JNEUROSCI.373705c.2006.
 26.
Lehnertz K, Geier C, Rings T, Stahn K. EPJ Nonlinear Biomed Phys. 2017;5:2. https://doi.org/10.1051/epjnbp/2017001.
 27.
Fell J, Axmacher N. Nat Rev Neurosci. 2011;12(2):105. https://doi.org/10.1038/nrn2979.
 28.
Fries P. Annu Rev Neurosci. 2009;32:209. https://doi.org/10.1146/annurev.neuro.051508.135603.
 29.
Wang XJ. Physiol Rev. 2010;90(3):1195. https://doi.org/10.1152/physrev.00035.2008.
 30.
Singer W, Gray CM. Annu Rev Neurosci. 1995;18:555. https://doi.org/10.1146/annurev.ne.18.030195.003011.
 31.
Fries P. Trends Cogn Sci. 2005;9(10):474. https://doi.org/10.1016/j.tics.2005.08.011.
 32.
Kirst C, Timme M, Battaglia D. Nat Commun. 2016;7:11061. https://doi.org/10.1038/pj.2016.37.
 33.
Deschle N, Daffertshofer A, Battaglia D, Martens EA. Front Appl Math Stat. 2019;5:28. https://doi.org/10.3389/fams.2019.00028.
 34.
Marder E, Bucher D. Curr Biol. 2001;11:R986. https://doi.org/10.1016/S09609822(01)005814.
 35.
Smith JC, Ellenberger HH, Ballanyi K, Richter DW, Feldman JL. Science. 1991;254(5032):726. https://doi.org/10.1126/science.1683005.
 36.
Butera RJ, Rinzel J, Smith JC. J Neurophysiol. 1999;82(1):382. https://doi.org/10.1007/bf00200329.
 37.
Jones MW, Wilson MA. PLoS Biol. 2005;3(12):e402. https://doi.org/10.1371/journal.pbio.0030402.
 38.
Uhlhaas PJ, Singer W. Neuron. 2006;52(1):155. https://doi.org/10.1016/j.neuron.2006.09.020.
 39.
Hammond C, Bergman H, Brown P. Trends Neurosci. 2007;30(7):357. https://doi.org/10.1016/j.tins.2007.05.004.
 40.
Lehnertz K, Bialonski S, Horstmann MT, Krug D, Rothkegel A, Staniek M, Wagner T. J Neurosci Methods. 2009;183(1):42. https://doi.org/10.1016/j.jneumeth.2009.05.015.
 41.
Rummel C, Goodfellow M, Gast H, Hauf M, Amor F, Stibal A, Mariani L, Wiest R, Schindler K. Neuroinformatics. 2013;11:159. https://doi.org/10.1007/s1202101291612.
 42.
Słowiński P, Sheybani L, Michel CM, Richardson MP, Quairiaux C, Terry JR, Goodfellow M. eNeuro. 2019;6(4):ENEURO.005919.2019. https://doi.org/10.1523/ENEURO.005919.2019.
 43.
Ashwin P, Coombes S, Nicks R. J Math Neurosci. 2016. https://doi.org/10.1186/s1340801500336.
 44.
Pietras B, Daffertshofer A. Phys Rep. 2019. https://doi.org/10.1016/j.physrep.2019.06.001.
 45.
Ashwin P, Swift JW. J Nonlinear Sci. 1992;2(1):69. https://doi.org/10.1007/BF02429852.
 46.
Hansel D, Mato G, Meunier C. Europhys Lett. 1993;23(5):367. https://doi.org/10.1209/02955075/23/5/011.
 47.
Hoppensteadt FC, Izhikevich EM. Weakly connected neural networks. New York: Springer; 1997. (Applied mathematical sciences; vol. 126). https://doi.org/10.1007/9781461218289.
 48.
Brown E, Moehlis J, Holmes P. Neural Comput. 2004;16(4):673. https://doi.org/10.1162/089976604322860668.
 49.
Nakao H. Contemp Phys. 2016;57(2):188. https://doi.org/10.1080/00107514.2015.1094987.
 50.
Monga B, Wilson D, Matchen T, Moehlis J. Biol Cybern. 2018. https://doi.org/10.1007/s004220180780z.
 51.
Cabral J, Hugues E, Sporns O, Deco G. NeuroImage. 2011;57(1):130. https://doi.org/10.1016/j.neuroimage.2011.04.010.
 52.
Luke TB, Barreto E, So P. Front Comput Neurosci. 2014;8:145. https://doi.org/10.3389/fncom.2014.00145.
 53.
Britz J, Van De Ville D, Michel CM. NeuroImage. 2010;52(4):1162. https://doi.org/10.1016/j.neuroimage.2010.02.052.
 54.
Destexhe A, Sejnowski TJ. Biol Cybern. 2009;101(1):1. https://doi.org/10.1007/s0042200903283.
 55.
Gupta S, Campa A, Ruffo S. Statistical physics of synchronization. Berlin: Springer; 2018.
 56.
Börgers C, Kopell N. Neural Comput. 2003;15(3):509. https://doi.org/10.1162/089976603321192059.
 57.
Buzsáki G, Wang XJ. Annu Rev Neurosci. 2012;35(1):203. https://doi.org/10.1146/annurevneuro062111150444.
 58.
Wilson HR, Cowan JD. Biol Cybern. 1973;13(2):55. https://doi.org/10.1007/BF00288786.
 59.
Amari Si. Biol Cybern. 1977;27(2):77. https://doi.org/10.1007/BF00337259.
 60.
Coombes S, Byrne Á. In: Corinto F, Torcini A, editors. Nonlinear dynamics in computational neuroscience. Cham: Springer; 2019. p. 1–16. https://doi.org/10.1007/9783319710488_1.
 61.
Strogatz SH. Nonlinear dynamics and chaos. Reading: Perseus Books Publishing; 1994.
 62.
Izhikevich EM. Dynamical systems in neuroscience: the geometry of excitability and bursting. Cambridge: MIT Press; 2007.
 63.
Panaggio MJ, Abrams DM. Nonlinearity. 2015;28(3):R67. https://doi.org/10.1088/09517715/28/3/R67.
 64.
Schöll E. Eur Phys J Spec Top. 2016;225(6–7):891. https://doi.org/10.1140/epjst/e2016026463.
 65.
Omel’chenko OE. Nonlinearity. 2018;31(5):R121. https://doi.org/10.1088/13616544/aaaa07.
 66.
Porter M, Gleeson J. Dynamical systems on networks. Cham: Springer; 2016. (Frontiers in applied dynamical systems: reviews and tutorials; vol. 4). https://doi.org/10.1007/9783319266411.
 67.
Rodrigues FA, Peron TKD, Ji P, Kurths J. Phys Rep. 2016;610:1. https://doi.org/10.1016/j.physrep.2015.10.008.
 68.
Pecora LM, Carroll TL. Phys Rev Lett. 1998;80(10):2109. https://doi.org/10.1103/PhysRevLett.80.2109.
 69.
Barahona M, Pecora LM. Phys Rev Lett. 2002;89(5):054101. https://doi.org/10.1103/PhysRevLett.89.054101.
 70.
Pereira T, Eldering J, Rasmussen M, Veneziani A. Nonlinearity. 2014;27(3):501. https://doi.org/10.1088/09517715/27/3/501.
 71.
Holme P, Saramäki J. Phys Rep. 2012;519:97. https://doi.org/10.1016/j.physrep.2012.03.001.
 72.
Bick C, Field MJ. Nonlinearity. 2017;30(2):558. https://doi.org/10.1088/13616544/aa4f62.
 73.
Tyulkina IV, Goldobin DS, Klimenko LS, Pikovsky A. Phys Rev Lett. 2018;120:264101. https://doi.org/10.1103/PhysRevLett.120.264101.
 74.
Goldobin DS, Tyulkina IV, Klimenko LS, Pikovsky A. Chaos. 2018;28(10):1. https://doi.org/10.1063/1.5053576.
 75.
Goldobin DS. Fluct Noise Lett. 2019;18(2):1940002. https://doi.org/10.1142/S0219477519400029.
 76.
Gottwald GA. Chaos. 2015;25(5):053111. https://doi.org/10.1063/1.4921295.
 77.
Skardal PS, Restrepo JG, Ott E. Chaos. 2017;27:083121. https://doi.org/10.1063/1.4986957.
 78.
Hannay KM, Forger DB, Booth V. Sci Adv. 2018;4(8):e1701047. https://doi.org/10.1126/sciadv.1701047.
 79.
Winfree AT. J Theor Biol. 1967;16(1):15. https://doi.org/10.1016/00225193(67)900513.
 80.
Kuramoto Y. Chemical oscillations, waves, and turbulence. New York: Springer; 1984.
 81.
Strogatz SH. Physica D. 2000;143:1. https://doi.org/10.1016/S01672789(00)000944.
 82.
Sakaguchi H, Kuramoto Y. Prog Theor Phys. 1986;76(3):576. https://doi.org/10.1143/PTP.76.576.
 83.
Cumin D, Unsworth CP. Physica D. 2007;226(2):181. https://doi.org/10.1016/j.physd.2006.12.004.
 84.
Breakspear M, Heitmann S, Daffertshofer A. Front Human Neurosci. 2010;4:190. https://doi.org/10.3389/fnhum.2010.00190.
 85.
Schmidt H, Petkov G, Richardson MP, Terry JR. PLoS Comput Biol. 2014;10(11):e1003947. https://doi.org/10.1371/journal.pcbi.1003947.
 86.
Ermentrout GB. Scholarpedia. 2008;3(3):1398. https://doi.org/10.4249/scholarpedia.1398.
 87.
Ermentrout GB, Kopell N. SIAM J Appl Math. 1986;46(2):233. https://doi.org/10.1137/0146017.
 88.
Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. New York: Springer; 2010. (Interdisciplinary applied mathematics; vol. 35). https://doi.org/10.1007/9780387877082.
 89.
Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge: Cambridge University Press; 2014.
 90.
Monteforte M, Wolf F. Phys Rev Lett. 2010;105(26):268104. https://doi.org/10.1103/PhysRevLett.105.268104.
 91.
Osan R, Ermentrout GB. Neurocomputing. 2001;38:789. https://doi.org/10.1016/S09252312(01)003903.
 92.
Ermentrout GB, Rubin J, Osan R. SIAM J Appl Math. 2002;62(4):1197. https://doi.org/10.1137/S0036139901387253.
 93.
Luke TB, Barreto E, So P. Neural Comput. 2013;25(12):3207. https://doi.org/10.1162/NECO_a_00525.
 94.
Laing CR. Phys Rev E. 2014;90(1):010901. https://doi.org/10.1103/PhysRevE.90.010901.
 95.
Gutkin B. In: Encyclopedia of computational neuroscience. 2015. p. 2958–65.
 96.
Latham PE, Richmond B, Nelson P, Nirenberg S. J Neurophysiol. 2000;83(2):808. https://doi.org/10.1152/jn.2000.83.2.808.
 97.
Hansel D, Mato G. Phys Rev Lett. 2001;86(18):4175. https://doi.org/10.1103/PhysRevLett.86.4175.
 98.
Brunel N, Latham PE. Neural Comput. 2003;15(10):2281. https://doi.org/10.1162/089976603322362365.
 99.
Kopell N, Ermentrout GB. Proc Natl Acad Sci USA. 2004;101(43):15482. https://doi.org/10.1073/pnas.0406343101.
 100.
Laing CR. Chaos. 2009;19(1):013113. https://doi.org/10.1063/1.3068353.
 101.
Omel’chenko OE. Nonlinearity. 2013;26(9):2469. https://doi.org/10.1088/09517715/26/9/2469.
 102.
Montbrió E, Pazó D, Roxin A. Phys Rev X. 2015;5(2):021028. https://doi.org/10.1103/PhysRevX.5.021028.
 103.
Pietras B, Daffertshofer A. Chaos. 2016;26(10):103101. https://doi.org/10.1063/1.4963371.
 104.
Mardia KV, Jupp PE. Directional statistics. Hoboken: Wiley; 1999. (Wiley series in probability and statistics). https://doi.org/10.1002/9780470316979.
 105.
Sakaguchi H. Prog Theor Phys. 1988;79(1):39. https://doi.org/10.1143/PTP.79.39.
 106.
Strogatz SH, Mirollo RE. J Stat Phys. 1991;63(3–4):613. https://doi.org/10.1007/BF01029202.
 107.
Lancellotti C. Transp Theory Stat Phys. 2005;34(7):523. https://doi.org/10.1080/00411450508951152.
 108.
Acknowledgements
We are grateful to R. Bogacz, B. Duchet, B. Jüttner, K. Lehnertz, and W. Woldman for helpful feedback on the manuscript, and to R. Mirollo and J. Engelbrecht for helpful discussions. We thank the anonymous referees for careful reading and suggestions that helped to improve this review. This article is part of the research activity (EAM, CL) of the Advanced Study Group 2017 “From Microscopic to Collective Dynamics in Neural Circuits” held at Max Planck Institute for the Physics of Complex Systems in Dresden (Germany).
Availability of data and materials
No new data were generated in this study.
Funding
MG gratefully acknowledges the financial support of the EPSRC via grants EP/P021417/1 and EP/N014391/1. MG also acknowledges the generous support of a Wellcome Trust Institutional Strategic Support Award (https://wellcome.ac.uk/) via grant WT105618MA.
Author information
Contributions
All authors contributed to writing the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bick, C., Goodfellow, M., Laing, C.R. et al. Understanding the dynamics of biological and neural oscillator networks through exact mean-field reductions: a review. J. Math. Neurosci. 10, 9 (2020). https://doi.org/10.1186/s13408-020-00086-9
Keywords
 Network dynamics
 Coupled oscillators
 Neural networks
 Mean-field reductions
 Ott–Antonsen reduction
 Watanabe–Strogatz reduction
 Kuramoto model
 Winfree model
 Theta neuron model
 Quadratic integrate-and-fire neurons
 Neural masses
 Structured networks