Identification of Criticality in Neuronal Avalanches: II. A Theoretical and Empirical Investigation of the Driven Case
 Caroline Hartley^{1, 2},
 Timothy J Taylor^{3},
 Istvan Z Kiss^{4},
 Simon F Farmer^{5, 6} and
 Luc Berthouze^{2, 3}
https://doi.org/10.1186/2190-8567-4-9
© C. Hartley et al.; licensee Springer 2014
Received: 16 September 2013
Accepted: 20 March 2014
Published: 25 April 2014
Abstract
The observation of apparent power laws in neuronal systems has led to the suggestion that the brain is at, or close to, a critical state and may be a self-organised critical system. Within the framework of self-organised criticality a separation of timescales is thought to be crucial for the observation of power-law dynamics, and computational models are often constructed with this property. However, this is not necessarily a characteristic of physiological neural networks—external input does not arrive only when the network is at rest/a steady state. In this paper we study a simple neuronal network model driven by a continuous external input (i.e. the model does not have an explicit separation of timescales, which would arise from seeding the system only when in the quiescent state) and analytically tuned to operate in the region of a critical state (it reaches the critical regime exactly in the absence of input—the case studied in the companion paper to this article). The system displays avalanche dynamics in the form of cascades of neuronal firing separated by periods of silence. We observe partial scale-free behaviour in the distribution of avalanche size for low levels of external input. We analytically derive the distributions of waiting times and investigate their temporal behaviour in relation to different levels of external input, showing that the system's dynamics can exhibit partial long-range temporal correlations. We further show that as the system approaches the critical state by two alternative 'routes', different markers of criticality (partial scale-free behaviour and long-range temporal correlations) are displayed. This suggests that the signatures of criticality exhibited by a particular system in close proximity to a critical state depend on the region of parameter space in which the system (currently) resides.
1 Introduction
In recent years, apparent power laws (i.e. where a power law is the best model for the data using a model selection approach [1, 2]) have been observed experimentally in neurophysiological data, leading to the suggestion that the brain is a critical system [3, 4]. These observations have included that of neuronal avalanches—cascades of neuronal firing recorded in vivo and in vitro whose size and duration appear to follow power-law distributions [5–9]. Recently it has been claimed that equivalent neuronal avalanche behaviour with the same power-law relationship can be identified in human MEG (magnetoencephalography) recordings [10]. On a wider scale, fluctuations in oscillation amplitude in human (adult and child) EEG (electroencephalography) and MEG exhibit a power-law decay of the autocorrelation function of the signal—a property known as long-range temporal correlations (LRTCs) [4, 11–15]. These observations and the idea that the brain is a critical system have drawn much attention, as critical systems have been shown to exhibit optimal dynamic range and optimal information processing [16, 17]. Moreover, it has led to the hypothesis that brain dynamics may fit within the framework of self-organised criticality (SOC), i.e. a system that does not require external tuning of parameters to reach the critical state [4, 18, 19].
While the observation of power laws within neuronal activity may be attractive, we must address the issue of whether (specifically) a neuronal system in the region of a critical state can produce this type of dynamics. Propagation of the spiking of neurons within a network has been interpreted within the context of percolation dynamics and the theory of branching processes [20, 21]. A critical branching process is a process in which one active node will activate on average one other node at the next time step, and so one can discern how this would relate to neuronal systems: the system is critical if one active neuron on average activates one other neuron at the next time step. A critical branching process will display power-law dynamics; however, a number of assumptions underlying branching processes do not hold true in neurophysiological systems. Firstly, the theoretical analysis of branching processes relies on full sampling of the system. Full sampling is unlikely to occur in the experimental setting and this can have a profound effect on the distribution [22]. Additionally, re-entrant connections invalidate the standard theory of branching processes [21], and so this brings into question the idea that neuronal systems can be modelled as critical branching processes. Moreover, the strict definition of a critical system is one that operates at a second-order phase transition, which applies only to systems with infinite degrees of freedom. Therefore, we should expect a critical system to exhibit an exact power-law distribution in the case of infinite size, but what should we expect if the system is finite? As neuronal systems are necessarily finite this is an important question in the neuroscience field, but one that has yet to be fully addressed. Within experimental results this fact has been accounted for by the concept of finite-size effects—where a power law is observed up to a cut-off value [2, 5, 6, 19].
This cut-off value has been suggested to coincide with the size of the system, and distributions from networks of different sizes have been shown to exhibit an exact scaling relationship—a phenomenon known as finite-size scaling [2, 23]. However, the finite-size effect with a cut-off value at system size has been assumed without analytical derivation (though, see the companion paper to this article [24], as described below) and the questions of how a finite critical system behaves and what types of dynamics are possible for such a system remain open in the field. Whether a finite-size system should display the same signatures of criticality as the system in the limit of system size is not known.
In the companion paper to this article [24] we examined a computational model of a finite neuronal system analytically tuned to its critical state, defined as a transcritical bifurcation. There we showed that the dynamics of the system, which by analogy with experimental neuronal avalanches could be termed avalanches (discrete cascades of neuronal firing), exhibited scaling which does not follow a power law but does exhibit partial scale-free behaviour. We were able to show that the cut-off value is approximately the system size, as suggested experimentally by the finite-size effect, but that it is analytically related to the leading eigenvalue of the transition matrix (the matrix of all possible transitions at each simulation step). This is an important observation given that avalanches in systems with re-entrant connections could in principle be of infinite size, and yet experimental observations have suggested that neuronal avalanches exhibit a finite-size cut-off [2, 5]. Overall, the results suggested that finite systems at criticality exhibit signatures of critical systems dynamics but do not (at least in this instance) exhibit exact power laws as had previously been suggested.
While the system studied in the companion paper leads us to a greater understanding of the dynamics displayed by a finite neuronal system, there is still an important difference between the system studied there and physiological neuronal systems. In the companion paper the system was seeded by setting a single neuron in the network into the active state, and an avalanche was defined as the firing that occurred until the network returned to a stable state (the fully quiescent state). After this point no more firing could occur until the system was reseeded. This imposed a separation of timescales, with all avalanches and neuronal firing occurring on a much faster timescale than the timescale of the 'external input' reseeding the system. Many other computational models have also taken this approach [18, 25, 26], with a separation of timescales thought to be necessary for the observation of self-organised critical dynamics [23]. While a separation of timescales is likely to occur in some natural systems such as earthquakes, where friction in the Earth's plates builds up over the course of years but energy is released in a matter of minutes, this is not a physiologically realistic assumption for a neuronal system. External input (be it from the environment or other areas of the nervous system) will not arrive only once the neuronal population has returned to a set state. Before physiological recordings can be interpreted within the field of critical systems we must address the question of the types of dynamics that should be expected not only of a finite-size system but also of a system that is driven by a physiologically realistic external input. Can a finite-size system without an explicit separation of timescales in the region of a critical regime exhibit markers of criticality? How might the external input to the system affect these markers?
Previous authors examining computational neuronal networks with continuous driving (i.e. no explicit separation of timescales) have observed power-law dynamics [16, 27–29]. In particular, Kinouchi and Copelli [16] and Larremore et al. [29] analytically determined the parameters required such that the model they studied was at criticality and displayed peak dynamic range, in fully connected networks and networks with a range of topologies, respectively. However, these authors did not explicitly examine the firing dynamics of the system in the region of the critical regime, concentrating instead on average activity levels. In a SOC system such as the sandpile model [18] the waiting times (periods of inactivity between avalanches) have been shown to follow an exponential distribution [30]. However, these waiting times are related to the reseeding of the system—sand is added to cells chosen at random and the next avalanche begins when a cell exceeds the threshold. In contrast, recent experimental work has shown that waiting times between neuronal avalanches in cultures have a distribution with two trends—a (short) initial power-law region thought to relate to neuronal up-states, and a bump in the distribution at longer waiting times thought to relate to neuronal down-states [31]. Could the difference between these waiting time distributions (between the SOC sandpile model and the neuronal avalanches in culture) be explained by the fact that physiological neuronal systems do not have a separation of timescales?
As described above, another signature of criticality that has been reported in neural systems is the presence of LRTCs. In the majority of cases they have been observed in large-scale neuronal signals such as human brain oscillations. Recent endeavours have been made to link these observations of scale-free behaviour on large scales with neuronal avalanches [32, 33]. Poil et al. demonstrated in a computational neuronal network that power-law distributed avalanches and LRTCs in oscillations emerge concurrently. In addition, LRTCs have also been detected in the waiting times of bursts of activity in cultures [34] and in the discontinuous burst activity recorded in the EEG of extremely preterm human neonates [35]. Thus, LRTCs have been demonstrated in discrete neuronal activity, yet they have not been examined in the waiting times of neuronal avalanches themselves. While LRTCs in avalanche activity would not be possible in a seeded computational system (where the activity is initiated 'by hand' and there is no memory within the system's dynamics), it is conceivable that a driven system, which is more akin to physiological networks that can display LRTCs, might display this type of dynamics in the waiting times of neuronal avalanches.
In this paper we address the following questions:
 1.
Assuming that the brain, or population of neurons under study, operates in the region of a critical regime, can it be expected to display power-law statistics given that it is a finite-size system? If not, what distribution should we expect? As discussed, this question was also addressed in the companion paper [24], where we studied a system without an external input. However, here we specifically consider this question in the context of a driven system.
 2.
Can we expect a finite-size neuronal system in the region of a critical regime to exhibit other markers of criticality, and specifically the presence of LRTCs? Does the presence of LRTCs relate to that of power-law distributions? As described above, LRTCs have been observed in neurophysiological data sets. However, a full theoretical examination of how LRTCs may relate to other markers of criticality in neuronal systems is lacking.
 3.
How are signatures of criticality (power-law distributions and LRTCs) affected by proximity to the critical regime? One might assume that a system which is closer to a critical regime will exhibit signatures of criticality, whereas a system that is further from the critical regime will not. Importantly, our analysis shows that this assumption is in fact not (always) true.
Although these questions are particularly applicable and novel to the field of neuroscience, it should be noted that similar questions have been the subject of much research in the field of statistical physics; see [36] for one review. Moreover, Markovian neural models with saturating firing functions have previously been suggested to fall within the universality class of directed percolation when analysed at the continuum and thermodynamic limit [37]. However, it is important to make the distinction between criticality in the statistical mechanics sense (i.e. defined in terms of a second-order phase transition in a system with infinite degrees of freedom) and criticality in the mathematical sense (i.e. defined in terms of a bifurcation in a low-dimensional mean-field model) such as is used in our work. This paper will show some shared phenomenology, although it should be clear that properties, in particular universal properties, associated with some classes of critical phenomena in the statistical mechanics sense should not necessarily be expected from either our or other related neuroscience models.
In this paper, as in the companion paper, we examine a purely excitatory (in terms of synaptic transmission—see Discussion) stochastic neuronal model. As in the companion paper, a number of assumptions are made to simplify the model, with the outcome that it is analytically tractable and can therefore be tuned to operate in the region of a critical regime. This approach is taken as it allows direct exploration of the above questions, which would not be possible with a more complex system. We begin by examining the distributions of avalanche size and duration, investigating the presence of scale-free behaviour. We also show that as the system approaches the theoretical critical regime by decreasing the external input, there is a change in the distributions of avalanche characteristics with the appearance of partial scale-free behaviour in avalanche size. It is important to note that the definition of avalanches strongly depends on the choice of binning method. In the literature different definitions of avalanches are used in models with seeded systems and in systems where the dynamics is continuous (including physiological recordings). We will return to this in the Discussion.
Unlike in the companion paper where the system was seeded after each avalanche, the system studied here was driven by a continuous external input. This allowed us to additionally assess the waiting times, which are intrinsic to the system, and we were able to analytically derive the distribution of waiting times. We then investigated the presence of partial LRTCs in the empirically derived waiting times. Finally, we showed that as the system size increases (and the system approaches the theoretical critical regime from a different route) the range over which the correlations extend also increases. Overall we find that the system displays different signatures of criticality depending on the region of the parameter space around the critical regime.
2 The Model
In this paper, as in the companion paper, we study a stochastic model based on that of Benayoun et al. [38], an extended version of the previously introduced stochastic rate model [37, 39]. Though greatly simplified from a physiological neural network, the model is chosen as it is analytically tractable and thus enables direct derivation of the parameters such that there is a critical regime. With this approach it is therefore possible to assess the dynamics of a neuronal system in the region of (or at) a critical regime. While Benayoun et al. considered a network with both excitatory and inhibitory connections, we simplify the system further (as in the companion paper), considering a network with purely excitatory synaptic connections. As will be discussed later, this type of network can be set within the context of early brain development.
where ${s}_{i}(t)={\sum}_{j}\frac{{w}_{ij}}{N}{a}_{j}(t)+{h}_{i}(t)$ is the input to neuron i, f is an activation function, ${h}_{i}(t)$ is the external input to neuron i, ${w}_{ij}$ is the connection strength from neuron j to neuron i and ${a}_{j}(t)=1$ if neuron j is active at time t and zero otherwise. Finally, α is a constant rate at which neurons change from the active to the inactive (quiescent) state. As in the companion paper, we make the following simplifying assumptions:
 1.
The synaptic connection strengths are the same for all connections with ${w}_{ij}=w>0$.
 2.
The external input is constant to all neurons and at all simulation steps so that ${h}_{i}(t)=h>0$.
 3.
The activation function is linear with $f(x)=x$.
As stated in the companion paper, we can use this equation to analyse the stability of the system about the fixed point and determine the parameters for which the system is at the threshold of stability, i.e. when the fixed point is critical. This threshold occurs when the eigenvalue (λ) of the fixed point is zero, which can alternatively be stated, borrowing terms from the epidemiology literature, as ${R}_{0}=1$ (the basic reproductive ratio). Moreover, this is also equivalent to a branching parameter of one. In the companion paper it was shown that with $h=0$, ${R}_{0}=\frac{w}{\alpha}$ and so for ${R}_{0}=\frac{w}{\alpha}=1\Rightarrow \alpha =w$ the system is critical.
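The $h=0$ criticality condition can be checked numerically. The sketch below is an illustration only: the mean-field drift $g(A)=(wA/N+h)(N-A)-\alpha A$ is our reconstruction, assembled from the transition rates $r_{qa}=f(s)(N-A)$ and $r_{aq}=\alpha A$ given in Sect. 2.1 with $f(x)=x$ and $s=wA/N+h$.

```python
def quiescent_eigenvalue(w, alpha, N=800, h=0.0, eps=1e-6):
    """Numerical eigenvalue of the mean-field dynamics at the quiescent
    fixed point A* = 0 (a fixed point only when h = 0).

    The drift g(A) = (w*A/N + h)*(N - A) - alpha*A is a reconstruction
    from the model's transition rates (r_qa minus r_aq).
    """
    g = lambda A: (w * A / N + h) * (N - A) - alpha * A
    # forward-difference estimate of the derivative of the drift at A = 0
    return (g(eps) - g(0.0)) / eps
```

For $h=0$ this returns $\lambda \approx w-\alpha$, so the system is critical ($\lambda=0$, equivalently $R_{0}=w/\alpha=1$) exactly when $\alpha=w$, stable for $\alpha>w$ and unstable for $\alpha<w$.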
For a fixed point to be critical we require that both these equations be satisfied. However, solving them simultaneously we find that there are no real roots when $w,h,N>0$. This implies that there is no parameter region such that the system (with this activation function and positive external input) has a critical fixed point. However, considering again the case with no external input ($h=0$), for which the critical state occurred with parameters $\alpha =w$, if this system is driven by a 'sufficiently low' level of external input it should still be within the region of the critical state. There has been some suggestion that the brain is not directly at a critical point but is in fact just very close to the critical regime, and it has been speculated that the brain may actually be slightly supercritical [32]. Additionally, it has been shown that a computational model of neuronal avalanches which follows a SOC approach [25] is actually a system that 'hovers' close to the critical state [23]. Therefore, the question of how a finite driven system behaves in the region of a critical regime is pertinent to the neuroscience field.
As $N\to \mathrm{\infty}$, $\lambda \to 0$ (see Fig. 2). Thus, for this level of the external input ($h=1/N$), as the system size (N) increases the system approaches the critical state (as the system reaches the critical state exactly when the eigenvalue $\lambda =0$). We will examine the effect on the dynamics of decreasing the external input, thereby allowing the system to approach the critical regime. We will also investigate an alternative route to the critical regime by increasing the system size in systems with a constant (overall) level of external input.
2.1 Model Simulations and Burst Analysis
As in the companion paper and in Benayoun et al. [38], simulations of the network dynamics were carried out using the Gillespie algorithm for stochastic simulations [40]. Briefly, at each step in the simulation:
 1.
The total transition rate r for all the neurons within the network is calculated, with $r={r}_{aq}+{r}_{qa}$, where ${r}_{aq}$ is the total rate of active → quiescent transitions, given by ${r}_{aq}=\alpha A$, and ${r}_{qa}$ is the total rate of all quiescent → active transitions, given by ${r}_{qa}=f({s}_{i})(N-A)$.
 2.
The time to the next transition dt is selected at random from an exponential distribution of rate r.
 3.
The type of transition is selected by generating a random number $n\in [0,1]$. If $n<\frac{{r}_{aq}}{r}$ then a randomly chosen active neuron becomes quiescent; otherwise a (randomly chosen) quiescent neuron switches to the active state.
At each step in the simulation a single neuron makes a transition, though the rate at which transitions occur changes and so the simulation step changes. If the network is in a fully quiescent state ($Q=N$) then, with positive external input, ${r}_{aq}=0$ but ${r}_{qa}=hN$ and consequently there will necessarily be a transition of a neuron from the quiescent to the active state. Similarly, when the network is in the fully active state ($A=N$) ${r}_{qa}=0$ but ${r}_{aq}=\alpha N$ and so there will necessarily be a transition of a randomly chosen neuron from the active to the quiescent state. From all other starting points transitions from the active to the quiescent or from the quiescent to the active state are possible. Thus, from all network states one neuron will change state. This is unlike the companion paper where with no external input the network must be seeded when in the fully quiescent state. Instead in this case network dynamics is continuous (i.e. no reseeding is required) and are of finite length only insofar as they are restricted by simulation lengths.
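The three Gillespie steps above can be sketched in a few lines of Python. This is a minimal illustration, not the code used for the paper's simulations; the function names and default parameter values are ours (the default $h=1/N$ is one of the input levels considered in the text).

```python
import random

def gillespie_step(A, N, w, alpha, h, rng):
    """One Gillespie step for the all-to-all excitatory network.

    A is the number of active neurons. Rates follow the text:
    r_aq = alpha * A and r_qa = f(s) * (N - A), with the linear
    activation f(s) = s and s = w * A / N + h. Returns (dt, new_A).
    """
    r_aq = alpha * A
    r_qa = (w * A / N + h) * (N - A)
    r = r_aq + r_qa
    dt = rng.expovariate(r)           # time to next transition ~ Exp(r)
    if rng.random() < r_aq / r:       # an active neuron becomes quiescent
        return dt, A - 1
    return dt, A + 1                  # a quiescent neuron becomes active

def simulate(N=800, w=1.0, alpha=1.0, h=None, steps=20000, seed=0):
    """Return the spike times (quiescent -> active transitions)."""
    h = 1.0 / N if h is None else h
    rng = random.Random(seed)
    t, A, spikes = 0.0, 0, []
    for _ in range(steps):
        dt, A_new = gillespie_step(A, N, w, alpha, h, rng)
        t += dt
        if A_new > A:                 # quiescent -> active, i.e. a firing
            spikes.append(t)
        A = A_new
    return spikes
```

Note that the rate structure enforces the boundary behaviour described above: at $A=0$ only a quiescent → active transition can occur, and at $A=N$ only an active → quiescent transition can occur.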
This burst dynamics is analogous to the neuronal avalanches observed experimentally in that both are discrete cascades of firing. Neuronal avalanches observed experimentally in physiological networks are so called because their sizes are distributed according to a power law; while the size distribution of the burst activity in this network has yet to be presented, we will refer to this activity throughout the rest of the paper as avalanches because of its discrete burst behaviour. To determine the distribution of the avalanches we divided the activity into individual avalanches using the approach of Benayoun et al. [38]. This method divides consecutive neuronal spiking between any two neurons within the network into separate avalanches if the time difference between the spikes is greater than the average difference (δt) between consecutive spikes within the simulation. This approach (referred to later in the text as the binning method) is similar to the method used to define neuronal avalanches within physiological data [5, 6]—though the choice of binning method will be discussed later in the paper. It is important to note that this binning approach used to define avalanches was not used in the companion paper, where an avalanche was defined as all firing that occurred before the network reached the fully quiescent state and was reseeded. The latter has been used as a standard classification for discontinuous data, stemming from the sandpile model of criticality [18]. However, as the firing dynamics here continues for the entire simulation, it was instead appropriate to use an approach that had previously been used for continuous dynamics.
Throughout the remainder of this paper we examine characteristics of these avalanches: namely the size and duration of avalanches, as well as the inter-avalanche intervals (IAIs). The size of an avalanche is defined (in the standard way) as the number of firings within the avalanche. If a single neuron fires more than once within a single avalanche it is counted more than once. The duration of an avalanche is defined as the time between the start of the avalanche (the first neuron firing) and the end of the avalanche. Note that if the avalanche consists of a single neuron firing then the duration of the avalanche is 0 (and the size of the avalanche is 1). Similarly, an IAI is defined as the time between the end of one avalanche and the start of the next avalanche, i.e. the waiting time between avalanches. Note that the IAIs are bounded below by δt, as a separation between two consecutive spikes of greater than δt defines separate avalanches.
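As an illustration, the binning rule and these three avalanche characteristics can be computed from a sorted list of spike times as follows (a minimal sketch; the function name is ours):

```python
def avalanches_and_iais(spike_times):
    """Split sorted spike times into avalanches using the binning rule of
    Benayoun et al.: a gap larger than the mean inter-spike interval
    (delta_t) starts a new avalanche. Returns (sizes, durations, iais)."""
    gaps = [b - a for a, b in zip(spike_times, spike_times[1:])]
    delta_t = sum(gaps) / len(gaps)        # mean inter-spike interval
    avalanches, current = [], [spike_times[0]]
    for gap, t in zip(gaps, spike_times[1:]):
        if gap > delta_t:                  # gap too large: new avalanche
            avalanches.append(current)
            current = [t]
        else:
            current.append(t)
    avalanches.append(current)
    sizes = [len(av) for av in avalanches]           # firings per avalanche
    durations = [av[-1] - av[0] for av in avalanches]  # 0 for single spikes
    iais = [b[0] - a[-1] for a, b in zip(avalanches, avalanches[1:])]
    return sizes, durations, iais
```

By construction every IAI returned here exceeds delta_t, matching the lower bound noted above.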
2.2 Distributions of Avalanche Size and Duration
It is worth considering what leads to the changes seen in the distributions as the level of external input is varied. As stated, as the level of external input decreases, the system approaches the critical regime, and so it is perhaps not surprising that signatures of criticality (i.e. scale-free behaviour) emerge in the distribution of avalanche size as the external input is lowered. Examining the raster plots of firing for the different levels of external input (see Fig. 3), we see that for lower levels the avalanches are further apart and more distinct. While the external input itself is continuous, at the lower levels of external input there is an effective separation of timescales, where one avalanche always finishes well before the next avalanche begins. The distribution therefore appears to follow similar characteristics to a system with a built-in separation of timescales, and we confirm that the distribution is similar to that found in the companion paper (in which the model had an explicit separation of timescales, i.e. the system was only seeded once it had reached the quiescent state), where an exponent close to 1.5 was also observed for the distribution of avalanche size. As the external input is increased there are no longer such distinct periods between avalanches. This leads to a superposition effect, with the next (actual) avalanche starting before the previous avalanche has finished (i.e. a new network cascade is initiated before the previous one has finished), and to these 'avalanches' being defined by the binning approach as a single avalanche (see Discussion). The scale-free behaviour in the distributions of avalanche size and duration is therefore lost.
2.3 Theoretical Derivation of the Distribution of the IAIs and Comparison with Simulated Data
The temporal patterning of activity within networks of neurons has long been investigated as a property of key importance, with neural rate and temporal coding suggested as potential substrates for information propagation. While it remains to be fully determined how different neuronal firing properties may lead to information transfer, this suggests that in addition to the distribution of avalanche sizes the intervals between avalanches need to be considered as a functional entity in their own right. As well as determining the IAI distributions through simulations, we found it possible to derive the theoretical distribution. In this section we derive this theoretical distribution and compare it with results from simulations.
We begin by noting that a single IAI is a period during which there is no neuronal firing, i.e. a period during which neurons can only be switching from the active to the quiescent state, or an IAI may be a period with a single quiescent to active transition which is preceded by another quiescent to active transition. We wish to derive the distribution of these periods. Let us initially ignore the fact that there is a minimum duration (δt) of an IAI and first consider the distribution of all consecutive active to quiescent transitions (we will return to the distribution of single quiescent to active transitions later).
2.3.1 Distribution of Consecutive Active to Quiescent Transitions
Whilst this involves higher-order derivatives, a closed-form solution is provided by Amari and Misra [43].
2.3.2 Probability Distribution of the Initial Number of Active Neurons
By substituting this value back into the set of equations (Eq. 6) the probabilities for the full system can be calculated.
2.3.3 Generalisation to a System of Any Size N
2.3.4 Distribution of Single Quiescent to Active Transitions
2.3.5 The IAI Distribution
As an aside, note that due to the product in the hypoexponential (see Eq. 2), determination of the probabilities for large N can become computationally intractable. For simulations larger than $N=50$ we therefore only determined the theoretical distribution up to a set level of the number of active neurons. We set this threshold level according to the probability distribution of starting from a particular number of active neurons (calculating the cumulative probability from zero active neurons), and sufficiently low that the calculations were computationally viable. However, the theoretical distributions calculated using this threshold are still a good fit to the simulated data—see Fig. 9.
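For reference, the density of a hypoexponential distribution (a sum of independent exponential stages) admits a standard closed form when the rates are pairwise distinct. The sketch below uses our own notation rather than transcribing Eq. 2, and illustrates where the numerical fragility comes from:

```python
import math

def hypoexp_pdf(t, rates):
    """Density of a sum of independent Exp(rate_i) variables with
    pairwise-distinct rates, via the standard partial-fraction form:

        f(t) = sum_i w_i * rate_i * exp(-rate_i * t),
        w_i  = prod_{j != i} rate_j / (rate_j - rate_i).

    The product over rate differences is what becomes numerically
    fragile (large alternating-sign weights) for many nearly-equal
    rates, cf. the intractability for large N noted in the text."""
    total = 0.0
    for i, li in enumerate(rates):
        w = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                w *= lj / (lj - li)
        total += w * li * math.exp(-li * t)
    return total
```

With a single stage this reduces to the ordinary exponential density, and with two stages to the familiar difference of exponentials.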
2.3.6 Distributions of Avalanche Size and Duration
As we have shown, the theoretical distribution of IAIs can be calculated by assessing periods of consecutive active to quiescent transitions and single quiescent to active transitions. It is also possible to derive the distribution of consecutive quiescent to active transitions. However, if a period of active to quiescent transitions (a period without firing) has a duration less than the average time difference between two spikes then this interval does not separate an avalanche into two. Therefore, the distributions of the number and length of consecutive quiescent to active transitions do not describe the distributions of avalanche size and duration—avalanches can also contain periods of active to quiescent transitions interleaved between two or more periods of quiescent to active transitions. Note that whether a period of active to quiescent transitions has a length less than the average difference between consecutive spikes is not dependent on the number of active to quiescent transitions within the interval, as the length of each transition is drawn at random from an exponential distribution. It was therefore not possible for us to determine a theoretical distribution of avalanche size and duration using this approach.
2.4 Statistical Comparison with a PowerLaw Distribution
The influential paper by Clauset et al. [1] developed a model-selection-based methodology to determine whether empirical data is likely to be power-law distributed. This method has been used to assess physiological neuronal avalanches and the results have shown that the power-law hypothesis is not rejected for this data [2]. It is therefore of interest to determine whether this is also the case for the data from the model studied here. Briefly, this method finds the best fit to a power law of the distribution under study. The empirical data is then compared to distributions of the same size that are generated by randomly drawing values to follow the best-fit power-law distribution. A p-value is calculated as the proportion of times that the empirical data is a better fit to the power law than the generated data (using the Kolmogorov–Smirnov test). As per Clauset et al. [1], the hypothesis (that the data comes from a power law) is rejected if the p-value is less than 0.1. As we have observed (Figs. 4, 9), the distribution of avalanche sizes appears to exhibit partial scale-free behaviour for low levels of external input ($h=0.1/N,0.01/N$) and the IAI distribution appears scale-free over a range of scales for $h=1/N$. As in the companion paper [24], we fit a truncated power-law distribution up to an avalanche size of ${x}_{\mathrm{max}}=\frac{9}{10}N$ in the case of avalanche size distributions. We fit a power-law distribution without truncation to the IAI distribution. Testing the entire avalanche size distributions (consisting of over 900,000 avalanches) yielded $p=0$, indicating that the hypothesis that the distribution follows a power law should be rejected. Similarly, taking the IAI distribution for $h=1/N$, testing the whole distribution of over 6,000,000 IAIs (note that there are more avalanches and therefore IAIs with larger h due to the higher firing rate) yielded $p=0$.
Testing instead the first 100,000 avalanches (a similar order of magnitude to the number of neuronal avalanches tested experimentally) with $h=0.1/N$ yielded $p=0.46$, indicating instead that the power-law hypothesis should not be rejected. Similarly, for $h=0.01/N$, testing the first 10,000 avalanches yielded $p=0.13$. These results are similar to those of the companion paper, where the power-law hypothesis was not rejected when the number of avalanches included in the distribution was of the same order as those tested experimentally, and they are indicative of the partial scale-free behaviour of the system in proximity to the critical regime.
In the case of the IAI distribution, testing the first 100,000 IAIs yielded $p=0.44$, indicating that a power law is a good fit to the data. Given that in this case we know that the IAI distribution is not a power law (it is in fact a weighted sum of hypoexponentials), it is interesting to note that the power-law hypothesis is not rejected when the number of data points is of the same order as that which has been tested experimentally, an observation that will be explained in the Discussion. When the power-law hypothesis is not rejected, Clauset et al. [1] employ a model selection process to determine the best model for the data. We did not carry out this testing here (as, at least in the case of the IAI distribution, we already know what the distribution is) and it may be that such a process would suggest that a power law is not the best fit to the data. However, the results here (and those of the companion paper) are indicative of the partial scale-free behaviour exhibited by the system in the region of the critical regime.
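The testing procedure described above can be sketched as follows. This is a simplified version of the Clauset et al. [1] method: it uses the continuous maximum-likelihood estimator with a fixed lower cut-off `xmin` rather than the full cut-off search of the published procedure, so it illustrates the logic rather than reproducing their implementation.

```python
import math, random

def fit_alpha(data, xmin):
    """Continuous MLE for the power-law exponent on the tail x >= xmin."""
    tail = [x for x in data if x >= xmin]
    alpha = 1 + len(tail) / sum(math.log(x / xmin) for x in tail)
    return alpha, tail

def ks_distance(tail, alpha, xmin):
    """Kolmogorov-Smirnov distance between the tail and the fitted power law."""
    tail = sorted(tail)
    n, d = len(tail), 0.0
    for i, x in enumerate(tail):
        cdf = 1 - (x / xmin) ** (1 - alpha)      # model CDF at x
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

def plaw_pvalue(data, xmin=1.0, n_boot=200, seed=0):
    """p-value: fraction of power-law surrogates fitting no better than the data."""
    rng = random.Random(seed)
    alpha, tail = fit_alpha(data, xmin)
    d_obs = ks_distance(tail, alpha, xmin)
    n, worse = len(tail), 0
    for _ in range(n_boot):
        # inverse-CDF sampling from the fitted power law
        surrogate = [xmin * (1 - rng.random()) ** (-1 / (alpha - 1))
                     for _ in range(n)]
        a_s, t_s = fit_alpha(surrogate, xmin)
        if ks_distance(t_s, a_s, xmin) >= d_obs:
            worse += 1
    return worse / n_boot     # reject the power-law hypothesis if < 0.1
```

For data genuinely drawn from a power law the p-value is roughly uniform on [0, 1], whereas a clearly non-power-law sample of sufficient size (e.g. exponential) yields a p-value near zero, mirroring the sample-size effect discussed in the text.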
2.5 Long-Range Temporal Correlations
As discussed in the Introduction, long-range temporal correlations are another possible signature of a system at (or near) a critical state and have also been observed in neurophysiological data [4, 11–15]. It is therefore of interest to determine whether this finite-size neuronal system with external input displays LRTCs—given that it is in the region of a critical regime—and whether LRTCs relate to other signatures of criticality, i.e. the presence of partial scale-free behaviour in the data distributions themselves. The latter is of particular interest given that we have seen a change in distributions as the system approaches the critical regime. As within any single simulation the level of external input is constant (and so does not itself display LRTCs), it is important to note at the outset that any LRTCs present in the dynamics of the system would be intrinsic to the system. Furthermore, the appearance of a power law within the distribution of any data set does not imply that the data will exhibit LRTCs, and vice versa. (Consider points drawn at random from a power-law distribution—such a data set would not exhibit LRTCs.)
In neurophysiological data, LRTCs have been observed in fluctuations of oscillation amplitude (i.e. within continuous data) [4, 11–15] and also in discrete burst activity, in our recent analysis of the inter-event intervals of bursts of nested oscillations in EEG recordings of extremely preterm human neonates [35]. Moreover, LRTCs in discrete data have previously been investigated by Peng et al. [44] and a number of other authors, for example [45–47], in their analyses of inter-heartbeat intervals. As the data from the model analysed here is discrete avalanche activity, we follow the approach of these previous analyses of LRTCs in discrete data, examining LRTCs in waiting times, i.e. in IAIs.
We assessed the presence of LRTCs in IAIs by estimating the Hurst exponent, H, which describes the degree of self-similarity within the data. A Hurst exponent of $H=0.5$ indicates that there are no correlations in the data, or short-range correlations only (for example, a white noise process), whereas a Hurst exponent of $0.5<H<1.0$ indicates LRTCs in the data. Additionally, an exponent of 1 corresponds to $1/f$ noise [44]. We estimated the Hurst exponent using detrended fluctuation analysis (DFA)—an approach that has been shown to produce more accurate estimates of the Hurst exponent than some other approaches [48] and that has been used previously to assess the presence of LRTCs in neurophysiological data sets [4, 11, 15, 35]. DFA is a graphical method whereby the average root mean square fluctuation within a box is compared across different box sizes, and the gradient of the line of best fit is the estimate of the Hurst exponent (for more detailed methodology see Peng et al. [44, 49]). We used a minimum box size of 5, with 50 box sizes equally spaced on a logarithmic scale up to a maximum box size of $1/10$ of the length of the IAI sequence [50]. Calculations were carried out using the MATLAB code of McSharry [51].
When examining the presence of LRTCs it is standard practice to compare the exponent of the actual data with the exponent of randomly shuffled data [4]. Shuffling should destroy any correlations present, and so the exponent of the shuffled data is expected to be approximately 0.5. We compared the original sequence (whose DFA plot is shown in Fig. 11) with 500 shuffled sequences. The DFA plots for the shuffled sequences (data not shown) did not exhibit crossover points, with the same linear trend being observed across all box sizes. The mean exponent of the shuffled sequences was 0.50, with a range of 0.48–0.52. Therefore, as the exponents of the original sequence (at smaller box sizes) do not fall within the distribution of exponents for the shuffled sequences, this further demonstrates that the original sequence exhibits complex temporal ordering, with correlations that extend across a range of box sizes (up to the upper crossover).
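The DFA and shuffle comparison can be sketched as follows. This is a minimal reimplementation for illustration (not the McSharry MATLAB code used in the paper), with fewer box sizes, applied to synthetic sequences; the long-memory test signal, its length, and the spectral-synthesis construction are assumptions made for the demonstration.

```python
import math, random

def linfit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sxx, my - (sxy / sxx) * mx

def dfa(x, min_box=5, n_sizes=20):
    """Detrended fluctuation analysis: Hurst exponent estimated as the
    slope of log F(s) against log s."""
    n = len(x)
    mean = sum(x) / n
    prof, s = [], 0.0
    for v in x:
        s += v - mean
        prof.append(s)                      # integrated (cumulative) profile
    max_box = n // 10                       # largest box: 1/10 of the sequence
    sizes = sorted({int(round(min_box * (max_box / min_box) ** (i / (n_sizes - 1))))
                    for i in range(n_sizes)})
    log_s, log_f = [], []
    for box in sizes:
        n_box = n // box
        sq = 0.0
        for b in range(n_box):
            seg = prof[b * box:(b + 1) * box]
            slope, icpt = linfit(list(range(box)), seg)   # detrend each box
            sq += sum((seg[i] - (slope * i + icpt)) ** 2 for i in range(box))
        log_s.append(math.log(box))
        log_f.append(math.log(math.sqrt(sq / (n_box * box))))
    return linfit(log_s, log_f)[0]

rng = random.Random(0)
n = 1000
white = [rng.gauss(0.0, 1.0) for _ in range(n)]
h_white = dfa(white)                        # ~0.5: no temporal correlations

# long-memory surrogate by spectral synthesis: power ~ k**(-beta), with
# beta = 2H - 1, so beta = 0.5 corresponds to H ~ 0.75 (assumed test signal)
beta = 0.5
phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n // 2 + 1)]
lrtc = [sum(k ** (-beta / 2) * math.cos(2 * math.pi * k * t / n + phases[k])
            for k in range(1, n // 2)) for t in range(n)]
h_lrtc = dfa(lrtc)                          # ~0.75: long-range correlations

shuffled = lrtc[:]
rng.shuffle(shuffled)
h_shuf = dfa(shuffled)                      # shuffling destroys them: ~0.5
```

As in the surrogate test above, shuffling the long-memory sequence returns the estimated exponent to approximately 0.5 while leaving its marginal distribution unchanged.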
2.5.1 Increasing the System Size
Next we considered whether the distributions of IAIs and avalanche size themselves changed with system size, and whether the change in the correlation length observed above was reflected in a change in the distributions. Figure 12 also shows the IAI and avalanche size distributions for different network sizes. In both cases, the distributions for different system sizes show only small differences, which can be accounted for by noise. Thus, as the system approaches the critical regime through increasing the system size, there does not appear to be a change in the distributions despite the change in the temporal correlations. Moreover, LRTCs are present in the data but the distribution of avalanche sizes does not exhibit scale-free behaviour, i.e. these markers of criticality do not occur simultaneously in this case. By contrast, when approaching the critical regime by lowering the external input we have shown that the avalanches are more distinct and the distribution of avalanche sizes exhibits partial scale-free behaviour.
2.5.2 The Effect on LRTCs of Decreasing the Level of the External Input
We also examined the DFA exponents at the lower levels of external input ($h=0.1/N$ and $h=0.01/N$). In both cases there were no crossover points, with a single linear trend across all box sizes (data not shown). The exponents were 0.50 (range 0.49–0.51, across 10 simulations with $N=800$) and 0.56 (range 0.55–0.57) for $h=0.01/N$ and $h=0.1/N$, respectively. Thus, at the lowest level of external input the IAIs do not exhibit LRTCs, and there is a slight increase in the exponent as the external input increases. This suggests that as the system approaches the critical regime through a decrease in the external input the temporal correlations are lost. Thus, the existence of LRTCs as the system approaches the critical regime depends on how the critical regime is approached, i.e. on the region of parameter space—approaching the critical regime through increasing the system size extends the temporal correlations, whereas decreasing the external input leads to a loss of long-range correlations. Moreover, this signature of criticality is independent of the other marker we have investigated—the presence of scale-free behaviour in the avalanche size distribution. Considering avalanche size and duration, scale-free behaviour is present at the lowest level of external input, precisely where LRTCs are lost. Thus, we find that markers of criticality are not only dependent on the region around the critical regime but also may not be present for the same parameter set.
3 Discussion
 1.
As the system approaches the critical regime through a reduction in the external input, the avalanches become more distinct and the distribution of avalanche sizes displays scale-free behaviour.
 2.
With $h=1/N$ the IAIs exhibit temporal correlations which extend across a range of box sizes up to an upper crossover. As the system approaches the critical regime through increasing the system size, the length of the temporal correlations is extended across a wider range of box sizes. These correlations (one noted signature of a critical system) are observed despite the fact that the distribution of avalanche sizes does not exhibit scale-free behaviour and does not change with the increase in system size. These temporal correlations are lost if the critical regime is instead approached through reducing the external input.
 3.
The distribution of IAIs was theoretically derived and shown to be a weighted sum of hypoexponentials. However, for $h=1/N$ (when the number of IAIs considered was of the same order as the number tested experimentally) the hypothesis that the IAI distribution follows a power law was not rejected by statistical testing, indicating the partial scale-free character of the distribution at this level of the external input.
3.1 Validity of the Model
The model considered in this paper was a highly simplified neuronal system with a number of assumptions, such as equally weighted synapses and continuous, constant external input. These assumptions were necessary in order to analytically tune the system to be in the region of a critical regime. Therefore, while this should not be taken as an accurate model of a neuronal system, it is important that we first consider models such as this, examining markers of criticality, as this will aid our understanding when building on this work with more complex models. This paper opens the way for future work examining the role of external input on signatures of criticality and the importance of the region of parameter space on network dynamics. Future work should also investigate the effect of topology on the dynamics [29, 52] and the effect of external input with different temporal and spatial characteristics.
3.1.1 Purely Excitatory Synaptic Transmission
The synaptic connections investigated in this model were purely excitatory. This not only simplifies the model for analytical investigation but is also of interest from a neurological perspective in terms of early brain development. Before birth, GABA is thought to have a depolarising effect on postsynaptic neurons, and it is not until the nervous system reaches a more mature state that this neurotransmitter becomes inhibitory [53, 54]. While presynaptic inhibition is thought to be present at all developmental stages [55], this effect can be considered to be accounted for in the model by the fact that neurons cannot re-fire until they have returned to the quiescent state (i.e. inhibition in the model relates to the rate α at which neurons return to the quiescent state). We have recently shown that EEG recordings from very preterm infants (when GABA is still thought to be purely excitatory) exhibit LRTCs in the temporal occurrence of bursts of activity [35]. The model studied here may be a candidate mechanism for the generation of this temporal patterning in the discontinuous activity of the developing brain. Moreover, it is interesting to note that, despite the fact that the system has purely excitatory postsynaptic connections and input, for these parameter regions the model does not exhibit runaway excitation (saturation) but is able to maintain stable dynamics through the ‘balance’ of individual neuronal dynamics resulting from a trade-off between the rates at which neurons become active and quiescent. Indeed, while a number of authors have suggested that a balance of excitation and inhibition in neuronal networks leads to critical behaviour [56], the work here and in the companion paper shows that excitatory networks (i.e. networks without inhibitory neurons) can display the same behaviour.
It can be speculated that this type of balanced activity in the region of a critical regime might be a way in which the brain avoids (for the most part) epileptic behaviour during early development, although it can also be argued that the decay rate contributes to a “balance condition” between excitation and inhibition [37].
3.1.2 The Activation Function
Here we used a linear activation function for the transition of neurons from the quiescent to the active state. However, physiological neurons are better described by a saturating activation function. The linear activation function used here was chosen so as to be analytically tractable, and it approximates a saturating function when the input is small. However, considering instead a saturating function (see Appendix C), we found that the dynamics in the region of the critical regime shows behaviour similar to that of the system with the linear activation function.
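The small-input equivalence can be checked directly. Taking $1-e^{-s}$ as an example saturating function (an illustrative assumption; Appendix C treats an exponential form), it agrees with the linear activation to first order in the input s:

```python
import math

# a saturating activation 1 - exp(-s) versus the linear activation s:
# 1 - exp(-s) = s - s**2/2 + s**3/6 - ..., so they differ only at O(s**2)
for s in [0.001, 0.01, 0.05]:
    f_saturating = 1.0 - math.exp(-s)
    assert abs(f_saturating - s) < s * s   # second-order difference
```

For the small inputs relevant near the critical regime (low h and low activity), the two activation functions therefore produce essentially the same transition rates.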
With both the linear and saturating activation functions, the critical regime can only be reached exactly in the absence of external input; a positive external input therefore drives the system away from the critical regime. However, with a quadratic activation function (see Appendix C) the system has a critical fixed point for a positive external input, and it can be tuned directly to this regime. With this activation function the dynamics does not appear to exhibit burst-like behaviour; however, analysis shows that the activity fluctuates about the critical regime in an ‘avalanche-like’ manner. Thus, while a quadratic function does not best describe activation in a neuronal network, we may further conclude that signatures of criticality are not universal and can be examined only in relation to the specific critical regime of the system (see Appendix C).
3.1.3 The Binning Approach
As described previously, the binning method separated avalanches where the time difference between consecutive spikes was greater than the average time difference between consecutive spikes across the entire simulation. This was the approach taken by Benayoun et al. [38]. However, it is worth noting that this is a slightly different approach from the method that has been used experimentally to separate neuronal avalanches, first proposed by Beggs and Plenz [5, 6]. In their analysis, neuronal firing is distributed into bins whose width equals the average time difference between consecutive spikes (δt), and firing is separated into avalanches by bins in which no firing occurs. Thus, two spikes may be more than the average time difference δt apart but still remain in the same avalanche if they fall within consecutive bins. The theoretical derivation of the IAI distribution relied on the fact that every period of consecutive active to quiescent transitions, or single quiescent to active transition, with a length greater than the average time between two spikes is an IAI. This would not be the case if the alternative (Beggs and Plenz) binning approach were used to determine avalanches. If this alternative approach had been used, the distributions of consecutive active to quiescent transitions and single quiescent to active transitions would be the same, but transitions of length greater than or equal to the average time between consecutive spikes (in fact, up to twice this average) may or may not form part of the IAI distribution depending on the exact binning. It is also important to note that with the binning method used here, even with dense neuronal firing (which occurs if the external input is increased from the levels studied here), as there is always an average time between consecutive spikes, it is always possible to separate the dynamics into ‘avalanches’.
Additionally, both these binning approaches differ from that used in non-driven systems such as the classical sandpile model [18] and the system investigated in the companion paper to this article [24]. In those models an avalanche consists of all firing until the system returns to the fully quiescent state; the system may therefore have a long period without firing, in which neurons switch to the inactive state, that will not be designated as separating two avalanches (if the system has not returned to the fully quiescent state) even when the period exceeds the average difference between consecutive spikes. Future work is needed to fully investigate how the differences between these avalanche definitions affect the distributions of size, duration and IAIs, and care needs to be taken when interpreting the results from these different approaches.
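The difference between the two driven-case definitions can be made concrete with a toy spike train. The spike times below are hypothetical, chosen so that one inter-spike gap exceeds the mean inter-spike interval while the two spikes still fall in consecutive bins:

```python
def avalanches_gap(spikes):
    """Benayoun-style: start a new avalanche whenever the gap between
    consecutive spikes exceeds the mean inter-spike interval."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    mean_isi = sum(isis) / len(isis)
    avalanches, current = [], [spikes[0]]
    for prev, s in zip(spikes, spikes[1:]):
        if s - prev > mean_isi:
            avalanches.append(current)
            current = []
        current.append(s)
    avalanches.append(current)
    return [len(a) for a in avalanches]

def avalanches_bins(spikes, dt):
    """Beggs-Plenz style: bin spikes at width dt; avalanches are runs of
    occupied bins separated by at least one empty bin."""
    counts = {}
    for s in spikes:
        b = int((s - spikes[0]) // dt)
        counts[b] = counts.get(b, 0) + 1
    sizes, current, prev_bin = [], 0, None
    for b in sorted(counts):
        if prev_bin is not None and b > prev_bin + 1:
            sizes.append(current)
            current = 0
        current += counts[b]
        prev_bin = b
    sizes.append(current)
    return sizes

spikes = [0.0, 0.9, 2.0, 2.1, 2.25]                        # hypothetical times
dt = sum(b - a for a, b in zip(spikes, spikes[1:])) / 4    # mean ISI = 0.5625
print(avalanches_gap(spikes))       # [1, 1, 3]: gaps 0.9 and 1.1 both exceed dt
print(avalanches_bins(spikes, dt))  # [2, 3]: 0.0 and 0.9 fall in adjacent bins
```

Under the binning definition the 0.9 gap (about 1.6δt) does not split the avalanche, illustrating the point above that gaps of up to twice the average inter-spike interval can remain within a single avalanche.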
3.1.4 Validity of DFA and the Investigation of LRTCs
DFA is one method by which to estimate the Hurst exponent and was chosen here as it has been shown to produce accurate estimates [48]. Moreover, it is a graphical approach and so can be used to check for crossover points [50]. As the Hurst exponent can only be estimated, it is considered best practice to check the consistency of the exponents using two methods [57]. However, as non-graphical methods give only single numerical values, they cannot be interpreted when crossover behaviour exists. Given that crossover points were present, we considered only DFA in this analysis.
Crossover points within a DFA plot have been shown to exist when the same correlations do not extend across the whole data sequence in analytically constructed data [50]. The crossover points in the data here can therefore be interpreted in this way, as points at which the correlations in the sequences change. It is important to understand that these crossover points (and box sizes in general) relate to the sequence length. For example, a box size of 10 indicates detrending across 10 consecutive IAIs. As the IAIs themselves can be of variable length the box size does not relate to a particular simulation time. Future investigation is needed to determine the relationship between the model and crossover points.
Correlations extended across a range of box sizes, with this range extending as the system size increased and the system approached the critical regime. It appears that, in the limit of infinite system size, correlations would extend across all box sizes. Thus, as the critical regime is approached in this way, this signature of a critical system emerges. LRTCs have been demonstrated previously in discrete neurophysiological data, in the waiting times of burst activity in cultures [34] and in the bursts of activity recorded using EEG in very preterm human neonates [35]. To our knowledge, waiting times of neuronal avalanches have yet to be examined in this way. However, such a study would provide an additional link between studies on the neuronal scale and studies on a wider network scale, for which LRTCs have been observed in the fluctuations of oscillation amplitude. Palva et al. demonstrated strong correlations between power-law exponents of avalanche size distributions and exponents of LRTCs in fluctuations of oscillation amplitude in human MEG recordings [33]. Recent computational work also demonstrated a link between neuronal avalanches on one scale and LRTCs on a wider temporal scale, and the authors called for future work in this area [32]. However, the authors of this study did not investigate LRTCs in the waiting times of the avalanches themselves. Interestingly, in the model studied here, LRTCs were observed when $h=1/N$ but not for lower levels of external input. Thus, they were not observed when the avalanche size distribution exhibited scale-free behaviour—the type of distribution observed for avalanches recorded in vivo and in vitro [5, 6, 9]. It would therefore be interesting to assess whether altering the driving force experimentally in vitro would lead to the types of dynamics (LRTCs) observed here.
3.2 Partial Scale-Free Behaviour in Avalanche Size
Statistical testing of the avalanche size distribution (with $h=0.1/N,0.01/N$) did not reject the hypothesis that the distribution followed a power law when the number of points within the distribution was of the order of the number of avalanches recorded in the experimental setting. Only with larger numbers of avalanches was the hypothesis that the distribution is a power law rejected. This is to be expected—as discussed by Klaus and Plenz [2], when a distribution deviates from the expected distribution by more than sampling noise, then given a large enough number of samples the power-law hypothesis will eventually be rejected. The fact that the power-law hypothesis was not rejected for lower numbers of avalanches demonstrates the partial scale-free behaviour of the system in the region of the critical regime. Moreover, this highlights the fact that stringent statistical testing such as this, with high sampling, may lead to rejecting the power-law hypothesis, and so the criticality hypothesis, even when the system is critical.
3.3 Waiting Times
In addition to increasing the physiological realism of the model, investigating the driven system also has the advantage of producing waiting times (in this case termed IAIs). In the companion paper, the simple reseeding of the network with a neuron set to the active state implied that there were no waiting times between avalanches. Other authors have reseeded by increasing the membrane potential but stipulated that neurons must reach a threshold before they become active (and a new avalanche starts) [18, 25]. This does lead to waiting times; however, these are not the same as the waiting times investigated in this model, which are intrinsic to the network dynamics rather than a result of network reseeding.
Recent work by Lombardi et al. [31] showed that the waiting times between neuronal avalanches recorded in vitro have a distribution with an initial power-law regime. The authors suggest that the shape of the distribution relates to up and down states within the network (which exhibit critical and subcritical dynamics, respectively) and were able to reproduce the non-monotonic waiting time distribution in a computational model in which neurons switch between up and down states depending on short-term firing history. Interestingly, the distribution they observe is similar to the IAI distribution for the system with $h=0.1/N$ (see Fig. 9(b)), which also has a scale-free initial regime, albeit over a shorter range than that presented by Lombardi et al. It is therefore possible that the waiting time distribution observed experimentally fits with the model constructed here. It would be interesting to investigate whether a change in input to the network in vitro alters the distribution in a manner similar to the distributions seen in Fig. 9.
Additionally, different distributions were observed for different parameter ranges, in the IAI distribution as well as in the distributions of avalanche size and duration. This leads us to the important conclusion that power-law distributions will not necessarily be displayed by systems in the region of a critical regime. Therefore, this work suggests that the absence of a power law in experimental data should not necessarily be taken to imply that the system does not lie in the region of a critical regime. This was also seen in the companion paper, where it was shown that despite being analytically tuned to the critical state (without the presence of external input) the avalanche size distribution was not a power law, although it did exhibit partial scale-free behaviour. The fact that the system may not exhibit power laws when close to (or at) the critical regime is an important finding given that the system is of finite size, as will be the case in the experimental setting. This highlights the necessity of examining other markers of criticality before conclusions about the critical nature of a system can be drawn.
3.4 Dynamic Range and Power Laws
Consistent with the results of previous authors [16, 29], we showed that the system exhibits optimal dynamic range when the branching parameter is equal to one. When calculating the dynamic range of the system, we emphasised that this value depends on the critical state of the system calculated when there is zero external input. We have shown that tuning a system to this critical point but then driving it with different levels of external input has a considerable effect on the distribution of avalanche sizes. For non-zero h the corresponding ODE would, in the strictest sense, not be considered critical. Importantly, however, tuning to the critical point of the system with zero external input maximises the dynamic range.
Dehghani et al. [58] showed that in vivo (in contrast to the results of Petermann et al. [9] and Hahn et al. [8]) avalanches were not well approximated by power laws, but were closer to exponential distributions. They contrast this with the evidence for the brain operating at criticality from in vitro studies [5, 59], where avalanches are well approximated by power laws. Here we argue that external input, together with functional benefits [17] such as dynamic range, information transmission and information capacity, offers a possible explanation for why in vivo and in vitro studies could give different results. The critical brain hypothesis demands that, in isolation from its natural surroundings (in vitro) and with no external influences acting upon it (akin to the model with $h=0$ we studied in the companion paper [24]), a culture should exhibit signs that it is tuned to criticality (i.e. avalanches that are well approximated by power laws). However, when observed in vivo, and thus with external inputs acting upon it, a critical brain may no longer exhibit avalanches approximated by power laws but would instead optimise functional benefits such as dynamic range and information transmission [17]. In our model we have shown that tuning the parameters to the critical regime does indeed maximise the dynamic range, but it is the level of external input that dictates whether the avalanche distributions exhibit partial scale-free behaviour. For this reason, avalanches recorded in vivo that lack a power-law distribution would not contradict criticality but would instead be an expected result. This further supports our suggestion in the companion paper [24] that future work should shift focus away from characterising avalanche distributions towards more appropriate metrics.
3.5 Two Routes to Criticality
In this paper we examined two different parameter changes through which the system approaches the critical state: increasing the system size and lowering the overall level of the external input. Despite the fact that in both cases the critical regime is approached, the dynamics and the signatures of criticality observed are different. With increasing system size the temporal correlations extend across a wider range. However, the distributions of the avalanche characteristics remain the same, and the distribution of avalanche size does not exhibit scale-free behaviour. By contrast, for lower overall levels of the external input the distributions of avalanche size and duration do exhibit partial scale-free behaviour; in this case, however, the temporal correlations in the avalanches are lost as the critical regime is approached. At these lower levels of the external input we also observe a greater separation of the avalanches, suggesting that the avalanches have less influence on each other, which would explain this loss of LRTCs. Thus, as the system approaches the critical state in two different regions of the parameter space, the dynamical properties of the system are very different. Significantly, this implies that not just the critical state itself but also the region of parameter space around it is an important factor in the system’s dynamics.
In conclusion, we have shown here and in the companion paper that in a finite-size neuronal system in the region of a critical regime the distributions of avalanche attributes need not be power laws. A common assumption in the literature is that power-law dynamics implies criticality and, conversely, that systems without power-law dynamics are not in the region of a critical regime; the results here suggest that this assumption need not be true. Moreover, we found that long-range temporal correlations and scale-free distributions depend not on proximity to the critical regime alone but on the region of the parameter space. The results further highlight the need for future work examining the type of dynamics we might expect from such systems.
Appendix A: Dynamic Range
Whilst [16] and [29] consider a discrete model where multiple events can happen per time step, here we show analytically that our continuous model will exhibit the same maximisation of the dynamic range when ${R}_{0}=1$. Here we use the calculation of ${R}_{0}$ for a system where there is no external input ($h=0$) and thus ${R}_{0}=w/\alpha $.
We note that in [16, 29] the logarithm of this is taken, but as the logarithm is an increasing function it is unnecessary to scale in this way for the result we obtain. Whilst using ${F}_{0.1}$ and ${F}_{0.9}$ is the standard for calculating the dynamic range, these values are somewhat arbitrary [16] and can be generalised to ${k}_{1}$ and ${k}_{2}$, respectively. To calculate the dynamic range analytically we consider the two regimes of ${R}_{0}$: firstly ${R}_{0}\le 1$ and secondly ${R}_{0}>1$.
A.1 Maximum of $\mathrm{\Delta}({R}_{0})$
Calculating the derivative of $\mathrm{\Delta}({R}_{0})$ we find that if $0<{k}_{1}<{k}_{2}<1$, then for ${R}_{0}\le 1$, $\frac{d\mathrm{\Delta}}{d{R}_{0}}>0$, whilst for ${R}_{0}>1$, $\frac{d\mathrm{\Delta}}{d{R}_{0}}<0$. Thus, there is a critical point at ${R}_{0}=1$ where the maximum of $\mathrm{\Delta}({R}_{0})$ is achieved—see Fig. 1. It is worth noting that $\mathrm{\Delta}({R}_{0})$ is independent of N and only depends on the choice of ${k}_{1}$ and ${k}_{2}$.
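The maximum at ${R}_{0}=1$ can also be checked numerically. The sketch below assumes the mean-field stationary condition $(1-a)(h+wa)=\alpha a$ for the active fraction a (the fixed point of the driven linear-activation model), inverts it for the input h, and evaluates the dynamic range in decibels; the decibel scaling and the specific numbers are illustrative choices, not taken from the appendix.

```python
import math

def dynamic_range(R0, k1=0.1, k2=0.9):
    """Dynamic range (in dB) assuming the mean-field steady state
    (1 - a)(h + w*a) = alpha*a, with branching parameter R0 = w/alpha."""
    w, alpha = R0, 1.0                        # set alpha = 1 w.l.o.g.
    f0 = 0.0 if R0 <= 1 else (w - alpha) / w  # baseline activity at h = 0
    fmax = 1.0                                # saturation as h -> infinity

    def h_of(a):
        # invert the steady state for the input level giving activity a
        return (w * a * a + (alpha - w) * a) / (1.0 - a)

    a1 = f0 + k1 * (fmax - f0)
    a2 = f0 + k2 * (fmax - f0)
    return 10 * math.log10(h_of(a2) / h_of(a1))

# the dynamic range peaks at the critical point R0 = 1:
# there h(a) = a**2/(1 - a), giving a ratio of 729 (about 28.6 dB)
print(dynamic_range(1.0))
```

Sweeping ${R}_{0}$ over, say, 0.5 to 1.5 reproduces the single maximum at ${R}_{0}=1$ shown in Fig. 1, and, as noted above, the value is independent of N and depends only on ${k}_{1}$ and ${k}_{2}$.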
Appendix B: Driving the System from a Subcritical and Supercritical State
Throughout the paper we have examined parameters such that the system is critical when there is no external input. In the presence of a small external input we therefore investigate driving the system in the region of this critical state. In the companion paper [24], with no external input, we also investigated the system with subcritical and supercritical parameters. In this appendix we briefly examine the dynamics of the system as it is driven from these states by an external input.
Appendix C: Altering the Activation Function
Throughout this paper we considered a linear activation function. What happens if a different activation function is chosen? Do we observe the same type of dynamics? In this appendix we briefly investigate two other activation functions: an exponential and a quadratic.
which defines the level of the external input at the critical fixed point.
which define the parameter space and the value of the fixed point for which a critical fixed point can be obtained. Thus, we find that unlike the model with the linear (and saturating) activation function, here with a nonzero external input it is possible to tune the system so that it is directly at the critical regime.
Upon examining this parameter space one can note that in many cases there also exists a stable (positive) fixed point as well as the critical fixed point. From simulating such a system we found (data not shown) that the dynamics is quickly attracted to the stable fixed point, and so the critical fixed point has little effect on the dynamics. Therefore, to have a system which is affected by a critical fixed point in the presence of a non-zero external input (in the case of this activation function and where positive parameters are required), the critical regime must be the only fixed point of the system. Given that $g(A)$ is a cubic polynomial, to achieve a single fixed point which is critical, this point must be an inflection point with ${g}^{\prime}(A)=0$ and ${g}^{\prime \prime}(A)=0$. From these equalities we find that the critical fixed point is $A=N/3$, and we must also have $h=\frac{wN}{27}$ and $\alpha =\frac{8wN}{27}$.
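The inflection-point conditions can be verified with exact rational arithmetic. The cubic below is an assumed mean-field form, $g(A)=(N-A)(h+wA^{2}/N)-\alpha A$, chosen because it is consistent with the stated values $A=N/3$, $h=wN/27$, $\alpha=8wN/27$; with those parameters it factors as a perfect cube, so $g={g}^{\prime}={g}^{\prime \prime}=0$ at the critical fixed point.

```python
from fractions import Fraction

def g(A, N, w, h, alpha):
    """Assumed mean-field drift with a quadratic activation function."""
    return (N - A) * (h + w * A * A / N) - alpha * A

N, w = Fraction(900), Fraction(1)
h, alpha = w * N / 27, 8 * w * N / 27

# g factors as -(w/N) * (A - N/3)**3: a triple root at A = N/3,
# i.e. an inflection point where g, g' and g'' all vanish
for A in [Fraction(0), Fraction(123), N / 3, Fraction(500), N]:
    assert g(A, N, w, h, alpha) == -(w / N) * (A - N / 3) ** 3
```

Exact `Fraction` arithmetic makes the factorisation check an identity rather than a floating-point approximation; any other choice of h or α breaks the triple root and splits the system into separate stable and unstable fixed points, as described above.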
Thus, while critical dynamics may not initially be apparent when examining data (for example, if we were to look at the overall dynamics from the simulations with the quadratic activation function), signatures of criticality can be observed when the dynamics is examined in relation to the known critical regime. Here the network firing fluctuates about the critical regime; that is, the number of active neurons fluctuates about this regime, and so the average number of active neurons across the course of a simulation is approximately equal to the critical value of $N/3$. It might therefore be interesting to examine the fluctuations about the mean activity level in experimental settings where activity is continuous (i.e. cannot be described as intermittent avalanche-like activity) to determine whether signatures of criticality are present. Indeed, such an approach has been taken previously to examine MEG data, thresholding at the median level [60].
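This fluctuation about the critical regime can be illustrated with a minimal Gillespie-style simulation [40] of a two-state network. The rate functions below are an assumed reading of the quadratic-activation model (quiescent neurons activating at rate $h+wA^2/N$, active neurons deactivating at rate $\alpha$), chosen to be consistent with the critical values $h=wN/27$ and $\alpha=8wN/27$ quoted above.

```python
import random

# Sketch: exact stochastic simulation of a two-state network with assumed
# quadratic-activation rates, tuned to the quoted critical parameters.
random.seed(1)
N, w = 100, 1.0
h = w * N / 27           # critical external input (per the text)
alpha = 8 * w * N / 27   # critical deactivation rate (per the text)

A = N // 3               # start near the critical fixed point
t, T = 0.0, 200.0
acc = 0.0                # time-weighted sum of the number of active neurons

while t < T:
    up = (N - A) * (h + w * A * A / N)  # quiescent -> active rate
    down = alpha * A                    # active -> quiescent rate
    total = up + down
    dt = random.expovariate(total)      # waiting time to the next event
    acc += A * min(dt, T - t)           # clip the final interval at T
    t += dt
    if random.random() * total < up:
        A += 1
    else:
        A -= 1

mean_activity = acc / T
print(mean_activity)  # fluctuates about the critical value N/3
```

Because the single fixed point is an inflection point, the restoring drift is weak and the fluctuations are large, but the time-averaged activity remains close to $N/3$ over long simulations.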
Declarations
Acknowledgements
Caroline Hartley is funded through CoMPLEX (Centre for Mathematics and Physics in the Life Sciences and Experimental Biology), University College London. Timothy Taylor is funded by a PGR studentship from the MRC and the Departments of Informatics and Mathematics at the University of Sussex. Istvan Z. Kiss acknowledges support from EPSRC (EP/H001085/1). Simon Farmer acknowledges support from the National Institute for Health Research University College London Hospitals Biomedical Research Centre.
Authors’ Affiliations
References
Clauset A, Shalizi CR, Newman MEJ: Power-law distributions in empirical data. SIAM Rev 2009, 51(4):661–703. 10.1137/070710111
Klaus A, Yu S, Plenz D: Statistical analyses support power law distributions found in neuronal avalanches. PLoS ONE 2011, 6(5): Article ID e19779
Chialvo DR: Emergent complex neural dynamics. Nat Phys 2010, 6(10):744–750. 10.1038/nphys1803
Linkenkaer-Hansen K, Nikouline VV, Palva JM, Ilmoniemi RJ: Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci 2001, 21(4):1370–1377.
Beggs JM, Plenz D: Neuronal avalanches in neocortical circuits. J Neurosci 2003, 23(35):11167–11177.
Beggs JM, Plenz D: Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004, 24(22):5216–5229. 10.1523/JNEUROSCI.0540-04.2004
Gireesh ED, Plenz D: Neuronal avalanches organize as nested theta and beta/gamma oscillations during development of cortical layer 2/3. Proc Natl Acad Sci USA 2008, 105(21):7576–7581. 10.1073/pnas.0800537105
Hahn G, Petermann T, Havenith MN, Yu S, Singer W, Plenz D, Nikolic D: Neuronal avalanches in spontaneous activity in vivo. J Neurophysiol 2010, 104(6):3312–3322. 10.1152/jn.00953.2009
Petermann T, Thiagarajan TC, Lebedev MA, Nicolelis MAL, Chialvo DR, Plenz D: Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proc Natl Acad Sci USA 2009, 106(37):15921–15926. 10.1073/pnas.0904089106
Shriki O, Alstott J, Carver F, Holroyd T, Henson RNA, Smith ML, Coppola R, Bullmore E, Plenz D: Neuronal avalanches in the resting MEG of the human brain. J Neurosci 2013, 33(16):7079–7090. 10.1523/JNEUROSCI.4286-12.2013
Linkenkaer-Hansen K, Nikulin VV, Palva JM, Kaila K, Ilmoniemi RJ: Stimulus-induced change in long-range temporal correlations and scaling behaviour of sensorimotor oscillations. Eur J Neurosci 2004, 19:203–211. 10.1111/j.1460-9568.2004.03116.x
Nikulin VV, Brismar T: Long-range temporal correlations in alpha and beta oscillations: effect of arousal level and test-retest reliability. Clin Neurophysiol 2004, 115(8):1896–1908. 10.1016/j.clinph.2004.03.019
Nikulin VV, Brismar T: Long-range temporal correlations in electroencephalographic oscillations: relation to topography, frequency band, age and gender. Neuroscience 2005, 130(2):549–558. 10.1016/j.neuroscience.2004.10.007
Smit DJA, de Geus EJC, van de Nieuwenhuijzen ME, van Beijsterveldt CEM, van Baal GCM, Mansvelder HD, Boomsma DI, Linkenkaer-Hansen K: Scale-free modulation of resting-state neuronal oscillations reflects prolonged brain maturation in humans. J Neurosci 2011, 31(37):13128–13136. 10.1523/JNEUROSCI.1678-11.2011
Berthouze L, James LM, Farmer SF: Human EEG shows long-range temporal correlations of oscillation amplitude in Theta, Alpha and Beta bands across a wide age range. Clin Neurophysiol 2010, 121(8):1187–1197. 10.1016/j.clinph.2010.02.163
Kinouchi O, Copelli M: Optimal dynamical range of excitable networks at criticality. Nat Phys 2006, 2(5):348–352. 10.1038/nphys289
Shew WL, Plenz D: The functional benefits of criticality in the cortex. Neuroscientist 2013, 19:88–110. 10.1177/1073858412445487
Bak P, Tang C, Wiesenfeld K: Self-organized criticality: an explanation of the $1/f$ noise. Phys Rev Lett 1987, 59(4):381–384. 10.1103/PhysRevLett.59.381
Jensen HJ: Self-Organized Criticality: Emergent Complex Behaviour in Physical and Biological Systems. Cambridge University Press, Cambridge; 1998.
Essam J: Percolation theory. Rep Prog Phys 1980, 43:833–912. 10.1088/0034-4885/43/7/001
Harris TE: The Theory of Branching Processes. Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen. Springer, Berlin; 1963.
Priesemann V, Munk MHJ, Wibral M: Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neurosci 2009, 10: Article ID 40
Bonachela JA, Muñoz MA: Self-organization without conservation: true or just apparent scale-invariance? J Stat Mech 2009, 2009: Article ID P09009
Taylor TJ, Hartley C, Simon PL, Kiss IZ, Berthouze L: Identification of criticality in neuronal avalanches: I. A theoretical investigation of the non-driven case. J Math Neurosci 2013, 3: Article ID 5
Levina A, Herrmann JM, Geisel T: Dynamical synapses causing self-organized criticality in neural networks. Nat Phys 2007, 3:857–860. 10.1038/nphys758
Olami Z, Feder H, Christensen K: Self-organized criticality in a continuous, nonconservative cellular automaton modeling earthquakes. Phys Rev Lett 1992, 68(8):1244–1247. 10.1103/PhysRevLett.68.1244
Ribeiro TL, Copelli M: Deterministic excitable media under Poisson drive: power law responses, spiral waves, and dynamic range. Phys Rev E, Stat Nonlinear Soft Matter Phys 2008, 77(5 Pt 1): Article ID 051911
Rubinov M, Sporns O, Thivierge JP, Breakspear M: Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput Biol 2011, 7(6): Article ID e1002038
Larremore DB, Shew WL, Restrepo JG: Predicting criticality and dynamic range in complex networks: effects of topology. Phys Rev Lett 2011, 106(5): Article ID 058101
Boffetta G, Carbone V, Giuliani P, Veltri P, Vulpiani A: Power laws in solar flares: self-organized criticality or turbulence? Phys Rev Lett 1999, 83(22):4662–4665. 10.1103/PhysRevLett.83.4662
Lombardi F, Herrmann HJ, Perrone-Capano C, Plenz D, de Arcangelis L: Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys Rev Lett 2012, 108(22): Article ID 228703
Poil SS, Hardstone R, Mansvelder HD, Linkenkaer-Hansen K: Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J Neurosci 2012, 32(29):9817–9823. 10.1523/JNEUROSCI.5990-11.2012
Palva JM, Zhigalov A, Hirvonen J, Korhonen O, Linkenkaer-Hansen K, Palva S: Neuronal long-range temporal correlations and avalanche dynamics are correlated with behavioral scaling laws. Proc Natl Acad Sci USA 2013, 110(9):3585–3590. 10.1073/pnas.1216855110
Segev R, Benveniste M, Hulata E, Cohen N, Palevski A, Kapon E, Shapira Y, Ben-Jacob E: Long term behavior of lithographically prepared in vitro neuronal networks. Phys Rev Lett 2002, 88(11): Article ID 118102
Hartley C, Berthouze L, Mathieson SR, Boylan GB, Rennie JM, Marlow N, Farmer SF: Long-range temporal correlations in the EEG bursts of human preterm babies. PLoS ONE 2012, 7(2): Article ID e31543
Hinrichsen H: Non-equilibrium critical phenomena and phase transitions into absorbing states. Adv Phys 2000, 49(7):815–958. 10.1080/00018730050198152
Buice MA, Cowan JD: Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E, Stat Nonlinear Soft Matter Phys 2007, 75(5 Pt 1): Article ID 051919
Benayoun M, Cowan JD, van Drongelen W, Wallace E: Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 2010, 6(7): Article ID e1000846
Cowan JD: Stochastic neurodynamics. Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems (NIPS) 1990, 62–69.
Gillespie D: Exact stochastic simulation of coupled chemical reactions. J Phys Chem 1977, 81(25):2340–2361. 10.1021/j100540a008
Ross SM: Introduction to Probability Models. 10th edition. Academic Press, Amsterdam; 2010.
Scheuer E: Reliability of an m-out-of-n system when component failure induces higher failure rates in survivors. IEEE Trans Reliab 1988, 37:73–74. 10.1109/24.3717
Amari S, Misra R: Closed-form expressions for distribution of sum of exponential random variables. IEEE Trans Reliab 1997, 46:519–522. 10.1109/24.693785
Peng CK, Havlin S, Stanley HE, Goldberger AL: Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 1995, 5:82–87. 10.1063/1.166141
Toweill DL, Kovarik WD, Carr R, Kaplan D, Lai S, Bratton S, Goldstein B: Linear and nonlinear analysis of heart rate variability during propofol anesthesia for short-duration procedures in children. Pediatr Crit Care Med 2003, 4(3):308–314. 10.1097/01.PCC.0000074260.93430.6A
Castiglioni P, Parati G, Di Rienzo M, Carabalona R, Cividjian A, Quintin L: Scale exponents of blood pressure and heart rate during autonomic blockade as assessed by detrended fluctuation analysis. J Physiol 2011, 589(Pt 2):355–369.
Ho KK, Moody GB, Peng CK, Mietus JE, Larson MG, Levy D, Goldberger AL: Predicting survival in heart failure case and control subjects by use of fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics. Circulation 1997, 96(3):842–848. 10.1161/01.CIR.96.3.842
Taqqu M, Teverovsky V, Willinger W: Estimators for long-range dependence: an empirical study. Fractals 1995, 3(4):785–798. 10.1142/S0218348X95000692
Peng CK, Buldyrev SV, Havlin S, Simons M, Stanley HE, Goldberger AL: Mosaic organization of DNA nucleotides. Phys Rev E, Stat Phys Plasmas Fluids Relat Interdiscip Topics 1994, 49(2):1685–1689.
Hu K, Ivanov PC, Chen Z, Carpena P, Stanley HE: Effect of trends on detrended fluctuation analysis. Phys Rev E, Stat Nonlinear Soft Matter Phys 2001, 64(1 Pt 1): Article ID 011114
McSharry P: DFA Matlab code. Systems Analysis, Modelling and Prediction Group, Department of Engineering Sciences, University of Oxford; 2005. [http://www.eng.ox.ac.uk/samp/software/cardiodynamics/dfa.m]. Last checked: 11 January 2011.
Sporns O: The non-random brain: efficiency, economy, and complex dynamics. Front Comput Neurosci 2011, 5: Article ID 5
Cherubini E, Gaiarsa JL, Ben-Ari Y: GABA: an excitatory transmitter in early postnatal life. Trends Neurosci 1991, 14(12):515–519. 10.1016/0166-2236(91)90003-D
Ben-Ari Y: Excitatory actions of GABA during development: the nature of the nurture. Nat Rev Neurosci 2002, 3(9):728–739. 10.1038/nrn920
Holmes GL, Khazipov R, Ben-Ari Y: New concepts in neonatal seizures. NeuroReport 2002, 13:A3–A8. 10.1097/00001756-200201210-00002
Shew WL, Yang H, Petermann T, Roy R, Plenz D: Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. J Neurosci 2009, 29(49):15595–15600. 10.1523/JNEUROSCI.3864-09.2009
Gao J, Hu J, Tung WW, Cao Y, Sarshar N, Roychowdhury VP: Assessment of long-range correlation in time series: how to avoid pitfalls. Phys Rev E, Stat Nonlinear Soft Matter Phys 2006, 73(1 Pt 2): Article ID 016117
Dehghani N, Hatsopoulos NG, Haga ZD, Parker R, Greger B, Halgren E, Cash SS, Destexhe A: Avalanche analysis from multielectrode ensemble recordings in cat, monkey and human cerebral cortex during wakefulness and sleep. Front Physiol 2012, 3: Article ID 302. [http://www.frontiersin.org/fractal_physiology/10.3389/fphys.2012.00302/abstract]
Friedman N, Ito S, Brinkman BAW, Shimono M, DeVille REL, Dahmen KA, Beggs JM, Butler TC: Universal critical dynamics in high resolution neuronal avalanche data. Phys Rev Lett 2012, 108: Article ID 208102. [http://link.aps.org/doi/10.1103/PhysRevLett.108.208102]
Poil SS, van Ooyen A, Linkenkaer-Hansen K: Avalanche dynamics of human brain oscillations: relation to critical branching processes and temporal correlations. Hum Brain Mapp 2008, 29(7):770–777. 10.1002/hbm.20590
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.