A Formalism for Evaluating Analytically the Cross-Correlation Structure of a Firing-Rate Network Model
 Diego Fasoli^{1, 2},
 Olivier Faugeras^{1} and
 Stefano Panzeri^{2}
DOI: 10.1186/s13408-015-0020-y
© Fasoli et al.; licensee Springer. 2015
Received: 30 October 2014
Accepted: 21 February 2015
Published: 15 March 2015
Abstract
We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of neural activity equations that include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies at any order among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations so obtained reveal previously unknown behaviors of recurrent firing-rate networks, especially on how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections and by correlation in the sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity, but also on the external inputs. Moreover we prove that in general it is not possible to find a mean-field description à la Sznitman of the network, if the anatomical connections are too sparse or our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent.
Due to its ability to quantify how activity of individual neurons and the correlation among them depends upon external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks.
Keywords
Functional connectivity, Neural networks, Firing-rate network model, Perturbative theory, Stochastic systems, Graph theory
1 Introduction
The brain is a complex system whose information processing capabilities critically rely on the interactions between neurons. One key factor that determines interaction among neurons is the pattern of their anatomical or structural connectivity, namely the specification of all the synaptic wirings that are physically present between neurons. However, communication among neurons appears to change dynamically [1], suggesting the presence of not-yet-understood network mechanisms that modulate the effective strength of a given connection. Understanding how the functional connectivity of a neural network (i.e. the set of statistical dependencies among different neurons or neural populations [2]) depends upon the anatomical connectivity and is further modulated by other network parameters has thus become a central problem in systems neuroscience [3–8].
In this article we introduce a new formalism for evaluating analytically the structure of dependencies among neurons in the finite-size firing-rate network with recurrent connections introduced in [9]. Although these dependencies can be computed from neural activity in a number of ways [10], in most cases functional connectivity is inferred by computing the correlation among neurons or populations of neurons [2]. In this article, we therefore concentrate on computing the correlations among neurons in a firing-rate network, although we also discuss how other measures of functional connectivity can be computed with the same formalism (Sect. 5).
To our knowledge, the problem of determining analytically the correlation structure of a neural network has begun to be investigated systematically only recently. This is in part due to the new experimental insights into functional connectivity among cortical neurons [3–8], and in part due to the focus of many previous mathematical studies of neural networks on the mean-field approximation. This approximation exploits the fact that (under certain hypotheses) neurons become independent in the thermodynamic limit, when the number of neurons N in the network goes to infinity. This kind of mean-field approximation has been developed by Sznitman [11–13], Tanaka [14–16], McKean [17, 18] and others. According to it, if the neurons are independent at time \(t=0\) (initial chaos), then in the thermodynamic limit this independence propagates to every \(t>0\).^{1} This phenomenon of propagation of chaos has been studied in different kinds of neural network models [19–22]. However, recent studies have begun to investigate the more interesting and rich structure of correlations arising in descriptions of network dynamics beyond the thermodynamic limit. For example, new studies considered finite-size networks with excitatory and inhibitory populations, where the firing rates are determined by a linear response theory [23–25]. These studies included in the network sources of Poisson randomness in the spike times [23, 24], as well as randomness originating from normal background white noise in the membrane potentials [25]. Pioneering approaches [26] relied on estimating correlation by using a perturbative expansion around the thermodynamic limit in the inverse number of neurons in the network. The method was developed for binary neurons, where the sources of randomness were the transitions between the two states of each neuron and the topology of the synaptic connections, and a similar model was reintroduced recently in [27] for large networks.
In [28] the author considered an alternative way to calculate correlations as a function of the inverse number of neurons (known as the linear noise approximation) and applied it to homogeneous populations of identical neurons with random fluctuations in the firing rates. In [29] the authors introduced a density functional approach adapted from plasma physics to study correlations in large systems, and applied it to a heterogeneous network of phase neurons with random initial conditions. Another effective approach is represented by large deviations theory. In [30–32] the authors considered a discrete-time network of rate neurons, whose sources of randomness were background Brownian motions for the membrane potentials and normally distributed synaptic weights.
Building on these previous attempts to study network correlations including finite-size effects that go beyond the mean-field approximation, here we develop an approach based upon a first-order perturbative expansion of the neural equations. We introduce randomness through three different sources: the background noise of the membrane potentials, their initial conditions and the distribution of the recurrent synaptic weights. These sources of variability are normally distributed and can be correlated, and their standard deviations are used as perturbative parameters. Using this formalism and this model, we quantify analytically how synaptic connections determine statistical dependencies at any order (not only at the pairwise level, as in previous studies) among different neurons. The technique developed in this article is general, but for simplicity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. A regular graph is a graph in which each vertex has the same number of neighbors, so this means that we consider networks where each neuron receives and makes the same number of connections. While this assumption is of course biologically implausible, it is sufficient to show interesting and non-trivial behaviors and will be relaxed to study more plausible connections in our future studies. We use this formalism to investigate in detail how the correlation structure depends on the strength of the external input to the network. We find that external input exerts profound and sometimes counterintuitive changes in the correlation among neurons: for example, a strong input can make the neurons almost independent. Moreover, we prove that in general it is not possible to find a mean-field description à la Sznitman of the neural network, due to the absence of chaos, if the anatomical connections are too sparse or our three sources of variability are correlated.
This demonstrates the fairly limited range of applicability of the meanfield approximation. Finally, we also show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent.
This article is organized as follows. In Sect. 2 we describe the details of the firing-rate network we use. We then develop a first-order perturbative expansion (Sect. 3) that allows the approximate analytical calculation, for a generic anatomical connectivity matrix, of the membrane potentials and the firing rates of the network. (In this section we assume the reader to be familiar with stochastic calculus [33, 34].) In Sect. 4 we then use these formulas for the membrane potentials and the firing rates to calculate analytically the pairwise and higher-order correlation structure of the network, as well as the joint probability distribution of both the membrane potentials and the firing rates. In Sect. 5 we briefly discuss how other measures of functional connectivity can be evaluated analytically through our theory. In Sect. 6 we specialize to the case of regular graphs and investigate network dynamics using some explicit examples of anatomical connectivity. We start by considering relatively simple cases, in particular a block-circulant graph with circulant blocks (Sect. 6.1) and a more general case of symmetric undirected graphs (Sect. 6.2). Then in Sect. 6.3 we conclude by showing how to extend the theory to highly complex regular graphs and by discussing some possible extensions to irregular networks. In Sect. 7 we assess the accuracy of our perturbative approach by comparing it to numerical simulations of the network’s equations. In Sect. 8 we show that the correlation structure depends dynamically on the external input of the network. In Sect. 9 we demonstrate with counterexamples that in general Sznitman’s mean-field approximation cannot be applied to the network when the sources of randomness are correlated (Sect. 9.1) or when the anatomical connectivity matrix is too sparse (Sect. 9.2). In Sect. 10 we introduce the phenomenon of stochastic synchronization. Finally, in Sect. 11 we discuss the implications of our results as well as the strengths and limitations of our mathematical approach.
2 Description of the Model
This concludes our description of the neural equations, so now we have all the ingredients to develop a perturbative expansion of the system. This method is introduced in the next section, and will allow us to find a series of new results for the behavior of our stochastic neural network.
3 Perturbative Expansion
Now with these results we can determine analytically the behavior of the neural network, starting from its correlation structure and probability density, which are discussed in the next section.
4 CrossCorrelation and Probability Density
The aim of this section is to compute the statistical dependencies among the activity of different neurons.
We observe that computation of Eq. (4.8) leads to a combinatorial problem, whose complexity is related to n and to the structure of the effective connectivity matrix \(J^{\mathrm{eff}}\). To simplify matters, in Appendix B we will calculate the higher-order correlation for a generic n in the case of a complete graph (i.e. a fully connected network).
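The quantity computed analytically by Eq. (4.8) also admits a simple numerical cross-check. The sketch below is a toy example with assumed Gaussian “membrane potentials”; the function name `higher_order_corr` and all parameter values are our own illustrative choices, not the paper’s. It estimates an n-th order normalized central moment by Monte Carlo:

```python
# Hedged sketch: Monte Carlo estimate of an n-th order normalized central
# correlation among neurons, the kind of quantity Eq. (4.8) computes
# analytically. All names and parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def higher_order_corr(V, idx):
    """V: (trials, neurons) samples; idx: tuple of neuron indices.
    Returns E[prod_k (V_k - mean_k)] / prod_k std_k."""
    centered = V[:, idx] - V[:, idx].mean(axis=0)
    return np.mean(np.prod(centered, axis=1)) / np.prod(centered.std(axis=0))

# toy correlated Gaussian "membrane potentials" over 100,000 trials
cov = 0.3 * np.ones((4, 4)) + 0.7 * np.eye(4)
V = rng.multivariate_normal(np.zeros(4), cov, size=100_000)
c4 = higher_order_corr(V, (0, 1, 2, 3))
```

For this Gaussian toy case Isserlis’ theorem predicts c4 = 3 × 0.3² = 0.27, so the estimator can be checked against a closed form.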
5 Other Measures of Functional Connectivity
In order to illustrate the generality of our approach, here we briefly describe how it can be extended to compute two other symmetric quantities commonly used to measure the functional connectivity, namely the mutual information and the magnitude-squared coherence [10, 37, 38].
We note that our formalism lends itself in principle also to the calculation of directed asymmetric measures of functional connectivity, such as those based upon transfer entropy [39, 40] or the Granger causality [41–43]. However, an analytical calculation of these directed quantities is beyond the scope of this article.
6 Examples
It is important to observe that (directed) regular graphs with uniform input satisfy the assumptions above, and for this reason they will be considered from now on, even though the hypothesis of regularity is not strictly required, since the neurons need not also have the same number of outgoing connections. We also observe that even if under our assumptions the neurons have the same \(\overline {J}_{ij}^{c}\), \(I_{i}^{c}\) (and, as a consequence, also the same \(\mu_{i}\)), this does not mean that they all behave identically. For example, from (4.11) we see that the mean of the membrane potentials is \(\overline{V}_{i} (t )=\mu+ \sum_{m=3}^{4} \sigma_{m}Y_{i}^{ (m )} (t )\) and that \(Y_{i}^{ (3,4 )}\) depend on \(\overline{J}^{v} (t )\) and \(I^{v} (t )\), which in general are not uniform. This proves that \(\overline{V}_{i} (t )\) depends on the index i, or, in other terms, that the neurons are not identical in this network.
To conclude, it is interesting to observe that if we choose \(\mathscr {A} (\mu )\) to be the algebraic activation function (see (2.2)), then Eq. (6.1) can be solved analytically, since it can easily be reduced to a fourth-order polynomial equation. Nevertheless, in every numerical simulation of this article we will use the logistic activation function, since its properties are ideal for studying the phenomenon of stochastic synchronization introduced in Sect. 10. Now we are ready to start with the first example.
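As a rough illustration of solving an equilibrium equation of this type numerically with the logistic activation, the following sketch runs a fixed-point iteration. The specific shape μ = τ(J̄𝒜(μ) + I^c) and all constants are our own assumptions for illustration, not the paper’s Eq. (6.1):

```python
# Hedged sketch: fixed-point iteration for an equilibrium equation of the
# general shape of Eq. (6.1). The equation's precise form and the constants
# (tau, jbar, i_c) are illustrative assumptions.
import numpy as np

def logistic(v, nu_max=1.0, lam=1.0, v_t=0.0):
    """Logistic activation A(v); defaults follow the parameters of Sect. 7."""
    return nu_max / (1.0 + np.exp(-lam * (v - v_t)))

def solve_mu(tau=1.0, jbar=0.5, i_c=1.0, iters=200):
    """Iterate mu <- tau * (jbar * A(mu) + i_c); the map is a contraction
    here since |tau * jbar * A'(mu)| <= 0.5 * 0.25 < 1."""
    mu = 0.0
    for _ in range(iters):
        mu = tau * (jbar * logistic(mu) + i_c)
    return mu

mu = solve_mu()
# residual of the assumed fixed-point equation
res = abs(mu - (0.5 * logistic(mu) + 1.0))
```

With the algebraic activation the same equilibrium could instead be found in closed form via the quartic mentioned above.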
6.1 Block-Circulant Matrices with Circulant Blocks
Some interesting consequences of these formulas, for the complete graph and other kinds of topologies, will be analyzed in Sects. 8, 9, 10. However, before that, in the next section we want to show the effectiveness of this perturbative method by applying it to another class of topologies, that of symmetric connectivity matrices.
6.2 Symmetric Matrices
6.3 Examples with More Complex Connectivity Matrices
Now we briefly discuss some more complex examples of connectivity. In particular, in Sect. 6.3.1 we focus on examples of more complex regular graphs, while in Sect. 6.3.2 we relax the hypothesis of regularity.
6.3.1 Product of Regular Graphs
In Sects. 6.1 and 6.2 we showed some relatively simple examples of regular graphs. It is possible to build more complicated topologies by means of graph operations that transform a graph into another while allowing us to calculate easily the spectrum of the new graph from that of the old one. There are two main kinds of graph operations: unary and binary. An example of a unary operation is the graph complement, which transforms a graph \(\mathcal{G}\) into its complement \(\overline{\mathcal{G}}\), namely the graph with the same vertices as \(\mathcal{G}\) and such that two distinct vertices of \(\overline{\mathcal{G}}\) are connected if and only if they are disconnected in \(\mathcal{G}\). For example, the complement of \(C_{4}\) is the disjoint union of two copies of \(K_{2}\). On the other hand, binary operations create a new graph from two initial graphs \(\mathcal {G}\), \(\mathcal{H}\). In this section we discuss only graph products, namely a particular kind of binary operation that proves very useful for studying networks made of different interconnected populations.
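For the Cartesian product, for example, the adjacency matrix of \(\mathcal{G}\,\square\,\mathcal{H}\) is \(A_{\mathcal{G}}\otimes I+I\otimes A_{\mathcal{H}}\), and its spectrum consists of all pairwise sums of the factors’ eigenvalues. A minimal numerical check of this standard fact (our own toy example, not code from the paper):

```python
# Illustrative check: the Cartesian product K2 box K2 equals the cycle C4,
# and its spectrum is all pairwise sums of the factors' eigenvalues.
import numpy as np

A_K2 = np.array([[0, 1], [1, 0]])  # complete graph on 2 vertices, eigs +-1
A_prod = np.kron(A_K2, np.eye(2)) + np.kron(np.eye(2), A_K2)  # K2 box K2

eigs = np.sort(np.linalg.eigvalsh(A_prod))
pair_sums = np.sort([l + m for l in (-1, 1) for m in (-1, 1)])
```

Iterating the same construction gives the hypercube graphs used later in the article, whose spectra therefore follow immediately from that of \(K_{2}\).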
6.3.2 Irregular Graphs
Up to now we have studied only regular graphs, because for this class it is possible to calculate easily the eigenquantities of \(\mathcal{J}\) from those of T by means of Eq. (6.2). In this section we show that this is not a strict requirement of our theory and that it can be applied also to irregular graphs. Regularity can be broken in two different ways: either by introducing nonuniform weights (since, by definition, regular graphs are unweighted), or by considering vertices with different (incoming or outgoing) degrees. In both cases we show how to calculate the eigenquantities of the Jacobian matrix in a relatively simple way.
Finally, this neural network can be extended to the case of multiple populations with different sizes and vertex degrees (of which a very special example is the complete k-partite graph, whose topology is generally irregular). The analysis is beyond the scope of this work and will be developed in upcoming articles.
7 Numerical Comparison
Parameters used for the numerical simulations of Figs. 5, 6, 7 and the right-hand side of Fig. 8. For the left-hand side of Fig. 8 and for Fig. 9 the parameters are the same, with the only exception of \(C^{ (0 )}\), \(C^{ (1 )}\) and \(C^{ (2 )}\), which have been set to zero.

Neuron: τ = 1
Initial conditions: Γ = 1
External input: \(I^{c}=1\)
Logistic function: \(\nu_{\mathrm{max}}=1\), Λ = 1, \(V_{T}=0\)
Correlations: \(C^{ (0 )}=0.4\), \(C^{ (1 )}=0.5\), \(C^{ (2 )}=0.6\)
Figure 5 has been obtained for \(\sigma=0.1\), and it clearly shows that the membrane potential follows very closely its numerical counterpart, while for the correlation the difference between the numerical simulation and the perturbative formula is of order 10^{−2}. This is compatible with the law of large numbers, according to which the statistical error introduced by a Monte Carlo method with \(\mathcal{T}\) trials is of order \(1/\sqrt{\mathcal{T}}\).
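The \(1/\sqrt{\mathcal{T}}\) scaling of the Monte Carlo error can be verified with a toy experiment (purely illustrative, unrelated to the actual simulation code behind Fig. 5):

```python
# Minimal illustration of the 1/sqrt(T) Monte Carlo error: increasing the
# number of trials 100-fold should shrink the statistical error ~10-fold.
import numpy as np

rng = np.random.default_rng(1)

def mc_error(trials, reps=200):
    """Empirical std of the sample-mean estimator of a unit Gaussian,
    measured over `reps` independent Monte Carlo runs."""
    return np.std([rng.standard_normal(trials).mean() for _ in range(reps)])

e_small, e_large = mc_error(100), mc_error(10_000)
ratio = e_small / e_large  # expected near sqrt(10000 / 100) = 10
```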
The error ε% has been calculated as a function of the perturbative parameter, for σ ranging from \(10^{-3}\) to 1. Since we also want to take into account the error introduced by the perturbative expansion with respect to the initial conditions, whose effect quickly vanishes due to the time constant τ, the error ε% has been calculated at a small time instant, namely \(t=1\). The result is shown in the left-hand side of Fig. 6, which confirms the goodness of the perturbative approximation, since the error is always smaller than 3.5% when calculated over \(10\mbox{,}000\) trials. ε% could be made even smaller by increasing \(\mathcal{T}\).
The right-hand side of Fig. 6 shows the numerical evaluation of the probability \(\mathscr{P} (t )\) for \(t=1\) (see (4.13)) according to the algorithm introduced in [49]. From the figure it is easy to check that for σ ranging from \(10^{-3}\) to 1 we obtain \(\mathscr {P}\approx1\), which further confirms the validity of our results.
8 Correlation as a Function of the Strength of the Network’s Input
In this section we consider how the cross-correlation among neurons depends upon a crucial network parameter, namely the strength of the external input current \(I^{c}\). As explained above, \(I^{c}\) represents the external input to the network (for example, a feedforward input from the sensory periphery, or a top-down modulatory input) that drives or inhibits the activity of our network. Studying how the network properties depend on the parameter \(I^{c}\) is important for many reasons. From the mathematical and theoretical point of view, this parameter is important because it may profoundly affect network dynamics. For example, the input can change the dynamical behavior of the system from a stationary to an oscillatory activity, because the eigenvalues of the Jacobian matrix (3.9) depend on μ, which in turn is determined by \(I^{c}\) through Eq. (6.1). So changing \(I^{c}\) can transform real eigenvalues into imaginary ones (in non-symmetric connectivity matrices) and therefore generate oscillations, or change the sign of the real part of an eigenvalue from negative to positive, giving rise to an instability. From the neural coding point of view, characterizing the dependence of different aspects of network activity upon the external input is necessary to understand and quantify how different aspects of network activity take part in the encoding of external stimuli [50–53]. Here we investigate specifically how the correlations among neurons depend on \(I^{c}\).
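The eigenvalue mechanism described above can be illustrated with a toy two-neuron Jacobian (our own assumed matrix, not the paper’s Eq. (3.9)): scaling the effective coupling, as a change of \(I^{c}\) does through \(\mathscr{A}' (\mu )\), moves complex eigenvalues across the imaginary axis, switching the system between damped oscillations and instability:

```python
# Illustrative toy Jacobian: -I/tau plus a gain-scaled nonsymmetric
# coupling. The coupling matrix and gains are assumptions for illustration.
import numpy as np

def jacobian(gain, tau=1.0):
    """Leak term -I/tau plus gain-scaled nonsymmetric coupling J."""
    J = np.array([[1.0, -2.0], [2.0, 1.0]])  # eigenvalues 1 +- 2i
    return -np.eye(2) / tau + gain * J

weak = np.linalg.eigvals(jacobian(0.2))    # -0.8 +- 0.4i: stable focus
strong = np.linalg.eigvals(jacobian(1.5))  # +0.5 +- 3.0i: oscillatory instability
```

The complex parts give (damped or growing) oscillations, and the sign of the real part flips as the gain crosses the critical value, mirroring the two transitions described in the text.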
We first examined the case when the sources of variability are independent (left panels of Fig. 8), i.e. when \(C^{ (0 )}\), \(C^{ (1 )}\), and \(C^{ (2 )}\) are equal to zero. Considering (3.10), it is apparent that this behavior originates from the sigmoidal shape of the activation function: when \(\vert I^{c}\vert \) is large, then \(\vert \mu \vert \) is large as well, therefore \(\mathscr{A}' (\mu )\) and the entries of the effective connectivity matrix are small. In other words, the neurons become effectively disconnected, due to the saturation of the sigmoidal activation function. An important consequence of this phenomenon is that the neurons become independent, even if the size of the network is finite. This result holds for both the complete graph (top-left panel) and the hypercube graph (bottom-left panel of Fig. 8). An important implication of this result is that, taking into account that ν increases with \(I^{c}\), in general \(\operatorname{Corr} (\nu_{i} (t ),\nu_{j} (t ) )\) is not a monotonic function of the firing rate.
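A quick numerical illustration of this saturation argument (with the logistic parameters of Sect. 7 used as assumed defaults): the gain \(\mathscr{A}' (\mu )\), which scales the entries of the effective connectivity matrix, collapses once |μ| is pushed into the saturated regime:

```python
# The derivative of the logistic activation, A'(mu), is maximal at the
# inflection point and decays to zero in the saturated regime, which is
# what effectively disconnects the neurons for strong inputs.
import numpy as np

def logistic_gain(mu, nu_max=1.0, lam=1.0, v_t=0.0):
    """A'(mu) for the logistic activation A(mu) = nu_max / (1 + e^{-lam (mu - v_t)})."""
    a = nu_max / (1.0 + np.exp(-lam * (mu - v_t)))
    return lam * a * (1.0 - a / nu_max)

gain_weak = logistic_gain(0.0)    # at the inflection point: 0.25
gain_strong = logistic_gain(10.0) # deep in saturation: ~4.5e-5
```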
When the sources of variability are correlated, we found (for both network topologies; see the right panels of Fig. 8) that the dependence of the correlation upon the parameter \(I^{c}\) is very different from the case of uncorrelated sources of variability. In this case, for both considered topologies, \(\operatorname {Corr} (\nu _{i} (t ),\nu_{j} (t ) )\) increases with the firing rate, provided that the sources of randomness are sufficiently correlated and the network is large enough (see the case \(N=32\) in the right panels of Fig. 8).
9 Failure of Sznitman’s Mean-Field Theory
In this section we take advantage of our ability to study generic networks to investigate the range of applicability of Sznitman’s mean-field theory for the mathematical analysis of a neural network. A neural network is generally described by a large set of stochastic differential equations, which makes it hard to understand the underlying behavior of the system. However, if the neurons become independent, their dynamics can be described with the mean-field theory using a highly reduced set of equations that are much simpler to analyze. For this reason the mean-field theory is a powerful tool for understanding the network. One of the mechanisms through which the independence of the neurons can be obtained is the phenomenon known as propagation of chaos [19–22]. Propagation of chaos refers to the fact that, if we choose chaotic initial conditions for the membrane potentials, then any fixed number of neurons are independent \(\forall t>0\) in the so-called thermodynamic limit, namely when the number of neurons in the system grows to infinity. Therefore the term propagation refers to the “transfer” of the chaotic distribution of the membrane potentials from \(t=0\) to \(t>0\). Under simplifying assumptions about the nature of the network (namely that the other sources of randomness in the system, in our case the Brownian motions and the synaptic weights, are independent), propagation of chaos does occur. However, in Sects. 9.1, 9.2 and 10 we show that in many cases of practical interest, e.g. for a system with correlated Brownian motions, initial conditions and synaptic weights, or with a sufficiently sparse connectivity matrix, or with an arbitrarily large (but still finite) size, the correlation between pairs of neurons can be high. Therefore in general any fixed number of neurons are not independent, which invalidates the use of Sznitman’s mean-field theory for analyzing such networks.
9.1 Chaos Does not Occur if the Sources of Randomness Are not Independent

if \(C^{ (0 )},C^{ (2 )}\neq0\), then \(C^{ (1 )}=0\) does not imply \(\operatorname{Corr} (V_{i} (t ),V_{j} (t ) )=0\) (i.e. there is no propagation of initial chaos);

at every finite t, if \(C^{ (1 )}\neq0\), then \(C^{ (0 )},C^{ (2 )}=0\) does not imply \(\operatorname{Corr} (V_{i} (t ), V_{j} (t ) )=0\) (i.e. absence of initial chaos does not lead to chaos).
9.2 Propagation of Chaos Does not Occur in Sufficiently Sparse Networks
Again, we show this through a counterexample. Since in this section we are interested in sparse systems, we study propagation of chaos in the thermodynamic limit as a function of the number of connections in a circulant and blockcirculant network. To this purpose, we set \(C^{ (0 )}=C^{ (1 )}=C^{ (2 )}=0\) (see previous section). For \(N\rightarrow\infty\) and finite M, the righthand sides of equations in (6.7) do not converge to zero, therefore for every finite value of M propagation of chaos does not occur.
10 Stochastic Synchronization
Finally, we use our formalism to demonstrate a theoretically interesting regime of network dynamics. In particular, we show that for every finite and arbitrarily large number of neurons in the network, it is possible to choose special values of the parameters of the system such that, at some finite and arbitrarily large time instant, correlation is (approximately) equal to one. In other terms, the stochastic components of the membrane potentials become perfectly synchronized, therefore from now on we refer to this phenomenon as stochastic synchronization. This is a very counterintuitive behavior of the network, since it does occur even when all the sources of randomness are independent (namely \(C^{ (0 )}=C^{ (1 )}=C^{ (2 )}=0\)). It is important to observe that this phenomenon requires a precise tuning of the parameters of the network, which is really hard to find by chance through numerical simulations. For this reason we need a rigorous theory that tells us how to set the parameters: such a theory is developed in the next section.
10.1 The General Theory
It is interesting to observe that, due to the Perron–Frobenius theorem [54], if the matrix with entries \(\frac {1}{M_{i}}J_{ij}^{\mathrm{eff}}\) (see Eq. (3.9)) is nonnegative and irreducible (namely if its corresponding directed graph is strongly connected, which means that it is possible to reach each vertex in the graph from any other vertex, by moving on the edges according to their connectivity directions), then it has a unique largest positive eigenvalue, which can be used to generate stochastic synchronization.
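As a sketch of how this dominant eigenvalue can be obtained in practice, the following toy power iteration (our own illustrative matrix, not the paper’s \(\frac{1}{M_{i}}J_{ij}^{\mathrm{eff}}\)) recovers the Perron–Frobenius eigenvalue of a nonnegative irreducible matrix:

```python
# Power iteration on a nonnegative, irreducible (strongly connected) matrix:
# by the Perron-Frobenius theorem it has a unique largest positive
# eigenvalue, to which the iteration converges. Toy matrix is illustrative.
import numpy as np

def perron_eigenvalue(W, iters=500):
    """Dominant eigenvalue of nonnegative W via normalized power iteration."""
    v = np.ones(W.shape[0])
    for _ in range(iters):
        v = W @ v
        v /= np.linalg.norm(v)
    return v @ W @ v  # Rayleigh quotient at convergence

# toy strongly connected weighted graph: eigenvalues are 1.0 and -0.5 (twice)
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
lam_max = perron_eigenvalue(W)
```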
To conclude, it is important to observe that we must be careful when we use the perturbative expansion to describe stochastic synchronization. Indeed, the divergence of the term \(e^{\gamma\widetilde{\lambda }_{\mathrm{max}}t}\) implies a fast growth of the variance of the membrane potential, therefore the first-order approximation may not be good enough, due to the possibly larger magnitude of the higher-order perturbative corrections. However, this problem can easily be fixed by choosing sufficiently small values of \(\sigma_{m}\) that ensure the variance is still small when the correlation is close to one. Another possibility is to choose the parameters of the network such that \(\widetilde{\lambda }_{\mathrm{max}}\) is negative but very close to zero. By continuity, in this case the correlation will be very close to one, and the variance cannot diverge since \(\widetilde{\lambda}_{\mathrm{max}}<0\).
Now we are ready to see an explicit example of stochastic synchronization, which will be developed in the next section for the complete and the hypercube graphs.
10.2 Examples: The Complete and the Hypercube Graphs
Moreover, from (B.1), it is interesting to observe that if there is perfect stochastic synchronization between pairs of neurons, then it is “transmitted” to all the higher-order correlations of even order, at least for the complete graph. In other terms, if the neurons are all-to-all connected, then \(\operatorname{Corr}_{2} (V_{i} (t ),V_{j} (t ) )=1\) implies \(\operatorname{Corr} _{n} (V_{i_{0}} (t ),\ldots ,V_{i_{n-1}} (t ) )=1\) for every even n.
11 Discussion
In this article we developed a novel formalism for evaluating analytically the cross-correlation structure of a finite-size firing-rate network with recurrent connections, using a first-order perturbative expansion of the neural equations. Importantly, the network we considered is stochastic and includes three distinct sources of randomness, namely the background noise of the membrane potentials, their initial conditions and the distribution of the recurrent synaptic weights. With this approach we succeeded in calculating analytically correlations at any order among all groups of neurons in the network. This formalism is general and in principle can be applied to networks with any topology of the anatomical connections, but here we applied it to the case of regular graphs. In upcoming articles this technique will be employed to study more general kinds of anatomical connections. In other terms, the present article represents a proof of concept of the ability of our theory to relate analytically the anatomical and functional connectivity.
The cases we have decided to study are networks with block-circulant and hypercube topologies. Clearly some of the results we have obtained could be specific for these special graphs. Nevertheless, our formalism applied to these cases has shown a series of (to our knowledge) new results, whose generality or specificity can be later determined by comparison with other kinds of anatomical connections.
11.1 Dependence of the Correlation Structure on the Parameters of the System
First of all, we quantified analytically how the correlation depends dynamically on the external input of the network. This has revealed a number of new and partly counterintuitive insights. We have shown that a strong input can make the neurons almost independent, and this reveals a simple mechanism to achieve network decorrelation that adds to those recently proposed, such as the balance of excitation and inhibition (e.g. [27, 55]) or the use of purely inhibitory feedback (e.g. [56]). Moreover, we have shown that it is not possible to obtain a mean-field description à la Sznitman of the neural network if the anatomical connections are too sparse or our three sources of variability are correlated. We have also proved that the correlation depends not only on the input, but also on the topology of the network and on the correlation structure of the sources of randomness. To conclude, we have shown that for very special values of the parameters, the neurons become almost perfectly correlated even if the sources of randomness are independent. We have called this phenomenon stochastic synchronization, and we stress that the formalism developed in this article is able to prove its existence for a completely generic anatomical connectivity whose eigenvalues satisfy a mild condition.
The dependence of network correlations on the neurons’ firing rates has been the subject of extensive investigation in recent years [57–59]. Our study of the dependence of the correlation on the strength of the external input allowed us to consider this problem analytically in our network. It is interesting to compare our results to those obtained in [57] for real in-vitro networks and for model integrate-and-fire networks. They reported that \(\operatorname{Corr} (\nu_{i} (t ),\nu_{j} (t ) )\) increases with the geometric mean of the firing rates. However, in our model this is not always the case. It happened for strongly correlated inputs and relatively large networks (a scenario compatible with the cases studied in [57]). However, in other instances our network showed a non-monotonic dependence of the correlation on the firing rates. A consequence of this non-monotonic dependence is that rates and correlations expressed by recurrent networks can indeed act as separate information channels for the encoding of the strength of external stimuli. We would also like to underline the fact that, according to those authors, the correlation between the firing rates is bounded by the correlation between the inputs. According to our model this is generally correct, but in some cases the neural network is able to generate almost perfectly correlated firing rates even if the inputs are independent. This is the phenomenon of stochastic synchronization discussed in Sect. 10.
11.2 Strengths and Weaknesses of the Presented Approach
As discussed in Sect. 1, our approach presents some advantages when compared to other methods based on linear response theory [23–25], networks of stochastic binary neurons [26, 27], the linear noise approximation [28], the density functional approach [29], and large deviations theory [30–32]. These advantages consist in the possibility of using different sources of variability, of studying synchronization and the effect of axonal delays, and of quantifying finite-size effects even for small networks. This means that our formalism lends itself to multiple generalizations and extensions. Additional sources of stochasticity, such as a random threshold \(V_{T}\) in the activation function or a stochastic membrane time constant τ, can be introduced in the model, even including correlations among the different sources. As stated above, delays in the transmission of the electric signal through the axons can be taken into account as well, following [60, 61]. Another possible extension of this study is the introduction of Hebbian learning. In this article we assumed for simplicity that the dynamics of the synaptic weights is already known, through the functions (2.5). However, in the case of synaptic plasticity the time evolution of the matrix \(J (t )\) depends on the membrane potentials \(V (t )\), so the system of differential equations (2.1) should be extended to include the differential version of Hebb’s learning rule. We also observe that in this article we have considered a deterministic topology T for the anatomical connectivity, meaning that T is fixed from trial to trial. An interesting extension is the study of random topologies, in particular random regular graphs [62], but this problem will be tackled in another article.
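The Hebbian extension mentioned above can be illustrated with a minimal numerical sketch. The code below is our own toy example, not the article’s equations: the 2-neuron network, the logistic activation function, the decaying Hebb rule, and all parameter values are assumptions chosen for illustration. It couples the membrane-potential equations to a differential learning rule and integrates both by the Euler method:

```python
import math

def sigmoid(v, v_t=0.0, slope=1.0):
    # Hypothetical activation function A(V); the logistic form is an assumption.
    return 1.0 / (1.0 + math.exp(-slope * (v - v_t)))

def simulate(steps=5000, dt=0.01, eta=0.5, tau_j=10.0, inputs=(1.0, 1.0)):
    """Euler integration of a 2-neuron rate network with a decaying Hebb rule:
         dV_i/dt       = -V_i + sum_j J_ij A(V_j) + I_i
         tau_J dJ_ij/dt = -J_ij + eta A(V_i) A(V_j)   for i != j."""
    V = [0.0, 0.0]
    J = [[0.0, 0.1], [0.1, 0.0]]          # no self-connections
    for _ in range(steps):
        A = [sigmoid(v) for v in V]
        dV = [-V[i] + sum(J[i][j] * A[j] for j in range(2)) + inputs[i]
              for i in range(2)]
        for i in range(2):
            for j in range(2):
                if i != j:
                    J[i][j] += dt * (-J[i][j] + eta * A[i] * A[j]) / tau_j
        for i in range(2):
            V[i] += dt * dV[i]
    return V, J

V, J = simulate()
```

With identical inputs and symmetric initial weights the two membrane potentials stay equal at all times, so the learned weights remain symmetric and, thanks to the decay term, bounded by η.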
A detailed analysis of the limits of our formalism for all values of the model parameters and for many graph topologies is beyond the scope of this article. Nevertheless, since ours is a perturbative approach, it shares the general limits and advantages of (non-singular) perturbation theory, to which the interested reader is referred. Our formalism can also be applied to other neural equations, such as the Wilson–Cowan model [63]. However, it requires the existence of a stable equilibrium point, around which the neural equations are linearized. Therefore this technique cannot be used to study the correlation structure of spiking neurons, such as FitzHugh–Nagumo [64, 65], Hodgkin–Huxley [66], or integrate-and-fire [67] neurons, because in these systems spikes are generated by periodic orbits. For example, for FitzHugh–Nagumo and Hodgkin–Huxley neurons, stable periodic orbits occur around unstable equilibria; our method therefore predicts the divergence of the covariance matrix for \(t\rightarrow\infty\), which is clearly a consequence of linearizing the neural equations. This also means that our formalism cannot be used to evaluate the correlation structure when equations (2.1) undergo neural oscillations generated through Hopf bifurcations, but it can still describe damped oscillations around a stable focus in the phase space when the connectivity matrix has complex eigenvalues.
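The role of the stable equilibrium can be seen already in a scalar caricature (our own toy example, not the article’s equations): for the linear SDE \(dx = a\,x\,dt + \sigma\,dW\), where \(a\) plays the role of the Jacobian at the equilibrium, the variance obeys \(dP/dt = 2aP + \sigma^{2}\), which converges to \(\sigma^{2}/(2|a|)\) when \(a<0\) and diverges when \(a>0\):

```python
def variance_trajectory(a, sigma=1.0, dt=1e-3, t_end=10.0):
    """Euler integration of dP/dt = 2*a*P + sigma**2, the variance equation
    of the scalar linear SDE dx = a*x dt + sigma dW, starting from P(0) = 0."""
    p = 0.0
    for _ in range(int(t_end / dt)):
        p += dt * (2.0 * a * p + sigma ** 2)
    return p

stable = variance_trajectory(a=-1.0)     # converges to sigma^2 / (2|a|) = 0.5
unstable = variance_trajectory(a=+0.5)   # grows without bound
```

This is exactly why linearization around an unstable equilibrium (as inside a stable limit cycle) produces a divergent covariance.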
Another difficulty of our formalism is the need for an analytical expression of the eigenquantities of the Jacobian matrix \(\mathcal{J}\), of which we have shown a biologically relevant example in Sect. 6.3.2. Clearly, the spectra of brain areas that perform complex functions are difficult to evaluate analytically, and for this reason we are forced to introduce some simplifications of the structural connectivity under study. Another possibility is to determine the eigenquantities numerically; then Eqs. (4.4)–(4.6) provide an algorithm for evaluating numerically the correlation structure of the network. Even with this method the eigenquantities cannot be calculated for very large networks, since the matrix \(\mathcal{J}\) is \(N\times N\) and therefore grows quadratically with the network size. However, even a numerical evaluation of Eqs. (4.4)–(4.6) has a clear advantage over the Monte Carlo approach. Indeed, if the randomness of the synaptic weights is taken into account (namely if \(\sigma_{2}\neq0\)), the Monte Carlo method requires generating the \(N^{2}\) entries of the matrix W according to the covariance matrix (2.6), which has \(N^{4}\) entries, and repeating this calculation over a sufficiently high number of trials; it is therefore much more expensive in terms of both computation time and memory.
It is important to observe that in this article we focused mainly on regular graphs for the sake of clarity, since for this class of connectivity matrices the eigenquantities of \(\mathcal{J}\) can be evaluated easily from those of \(T\circ\overline{J}^{c}\) through Eq. (6.2). For a general connectivity this relation is harder to find, but we underline that this is in part due to our choice of a biologically realistic activation function \(\mathscr{A} (\cdot )\) (see Eqs. (3.9) and (3.10)). In order to obtain analytical results, piecewise linear activation functions are widely used in the literature (e.g. in [48, 68, 69]). Clearly in that case it is much easier to evaluate the eigenquantities of \(\mathcal{J}\) from those of \(T\circ\overline{J}^{c}\), taking some care at the connection points between the segments of \(\mathscr{A} (\cdot )\), where the piecewise linear function is not differentiable.
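As a concrete instance of the regular-graph simplification (our own toy example, not a network studied in the article), for a ring in which each neuron is connected to its nearest neighbor on both sides the topology matrix is circulant, so its eigenvalues are the discrete Fourier transform of its first row:

```python
import cmath

def circulant_eigenvalues(first_row):
    """Eigenvalues of a circulant matrix C with the given first row:
    lambda_m = sum_j c_j * exp(-2*pi*i*j*m / N), for m = 0, ..., N-1."""
    n = len(first_row)
    return [sum(c * cmath.exp(-2j * cmath.pi * j * m / n)
                for j, c in enumerate(first_row))
            for m in range(n)]

# Ring of N = 8 neurons, each connected to both nearest neighbors.
N = 8
row = [0.0] * N
row[1] = row[N - 1] = 1.0          # first row of the topology matrix T
lam = circulant_eigenvalues(row)
# The topology is symmetric, so the spectrum is real: lambda_m = 2 cos(2*pi*m/N).
```

The same formula applies to any circulant topology, e.g. a ring with \(k\) neighbors on each side; only the first row changes.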
Another useful feature of our approach is that it allows the calculation of the dependence of correlations of arbitrary order (not only pairwise correlations) on the strength of the external input. This feature will be useful for evaluating the ability of networks to encode genuinely additional information in the input-dependent variations of higher-order correlations, a subject of intense theoretical [70] and experimental [71, 72] debate in recent years.
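A classical toy example (ours, not from the article) shows why higher-order correlations can carry information invisible to pairwise ones: for three ±1 variables with \(z = xy\), all pairwise correlations vanish while the third-order correlation is maximal.

```python
from itertools import product

# Four equiprobable states of two independent +/-1 variables x, y, with z = x*y.
states = [(x, y, x * y) for x, y in product((-1, 1), repeat=2)]

def avg(f):
    # Expectation of f(x, y, z) under the uniform distribution over the states.
    return sum(f(*s) for s in states) / len(states)

mean_x = avg(lambda x, y, z: x)          # all means are 0 ...
corr_xy = avg(lambda x, y, z: x * y)     # ... and all pairwise moments vanish
corr_xz = avg(lambda x, y, z: x * z)
corr_yz = avg(lambda x, y, z: y * z)
third = avg(lambda x, y, z: x * y * z)   # but E[xyz] = E[x^2 y^2] = 1
```

Any pairwise analysis would call these three variables independent, yet the triplet moment identifies their deterministic dependence.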
11.3 Analyzing the Consequences of Structural Damage
Similarly to spectral graph theory, where the properties of a graph are studied in relation to its characteristic polynomial and eigenquantities, in this article we have found the relation between the functional connectivity and the spectrum of the underlying structural connectivity. This, in principle, allows one to study the effect of lesions of the synaptic connections on the functional connectivity. Such structural damage can be modeled as a perturbation of the topology matrix and can thus be studied by perturbative techniques such as those described in [73–77]. This branch of graph theory deals with discrete perturbations (such as the removal of connections or vertices from a given graph), as opposed to the Rayleigh–Schrödinger theory of quantum mechanics, which studies the effect of continuous perturbations on the generalized eigenvalue problem. This approach would help to understand abnormal functional behavior, complementing other studies of the consequences of structural damage, e.g. [78].
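As a minimal numerical illustration (our own sketch; the ring topology, the spectral shift, and the use of power iteration are our choices, not methods from the article), removing a single connection from a regular ring lowers the leading eigenvalue of the topology matrix below the degree of the intact graph:

```python
def lambda_max(A, iters=2000):
    """Largest eigenvalue of a symmetric non-negative matrix A via power
    iteration on the shifted matrix A + 2I (the shift keeps the spectrum
    positive, so the iteration converges to the Perron eigenvalue)."""
    n = len(A)
    B = [[A[i][j] + (2.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v, lam = [1.0] * n, 1.0
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam - 2.0                       # undo the shift

# Adjacency matrix of a ring of 7 vertices (a 2-regular graph).
n = 7
A = [[1.0 if abs(i - j) in (1, n - 1) else 0.0 for j in range(n)]
     for i in range(n)]
before = lambda_max(A)          # equals the degree of the regular graph: 2
A[0][1] = A[1][0] = 0.0         # "lesion": remove one connection
after = lambda_max(A)           # strictly below 2
```

For a regular graph the leading eigenvalue equals the degree; after the lesion the graph is no longer regular and its Perron eigenvalue drops, shifting the whole functional-connectivity structure that depends on the spectrum.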
11.4 Possible Extensions to Other Measures of Communication Among Neurons
It is also interesting to observe that the correlation structure can be used to estimate causal relations between neurons or neural populations. This can be achieved in many ways. However, in our view a promising direction is to take advantage of hierarchical clustering techniques already used in economics, whose potential application is briefly described as follows. According to [79], the correlation structure can be used to define a distance measure \(d_{ij} (t )\overset{\mathrm{def}}{=}\sqrt{2 (1-\operatorname{Corr} (V_{i} (t ),V_{j} (t ) ) )}\) between every pair of neurons. Clearly we are not interested in the hierarchical structure of single neurons, but rather in that of mesoscopic or macroscopic areas. For this reason, from \(d_{ij} (t )\) we have to define an arbitrary distance between these areas of the brain (e.g. the mean distance between all the pairs of neurons). Then, from the distance matrix of the areas, we can determine the minimum spanning tree of the system, a concept introduced in the context of graph theory to find the most relevant (or more informative) connections in a network. Finally, on the minimum spanning tree it is possible to define an ultrametric distance, which in turn allows us to build a dendrogram (i.e. a hierarchical tree) in an unambiguous way, by using techniques such as UPGMA [80].
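The pipeline just described can be sketched as follows (a hypothetical illustration: the toy correlation matrix and the pure-Python UPGMA implementation are ours; in practice the correlation matrix would come from Eqs. (4.4)–(4.6)):

```python
import math

def upgma(dist):
    """UPGMA agglomerative clustering on a symmetric distance matrix.
    Returns the merge history as (cluster_a, cluster_b, height) tuples;
    new clusters receive ids len(dist), len(dist)+1, ..."""
    clusters = {i: [i] for i in range(len(dist))}
    merges, next_id = [], len(dist)
    while len(clusters) > 1:
        # Pair of clusters with the smallest average pairwise distance.
        pairs = [(sum(dist[i][j] for i in clusters[a] for j in clusters[b])
                  / (len(clusters[a]) * len(clusters[b])), a, b)
                 for a in clusters for b in clusters if a < b]
        d, a, b = min(pairs)
        merges.append((a, b, d))
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        next_id += 1
    return merges

# Toy correlation matrix: units (0,1) and (2,3) form two tight groups.
corr = [[1.0, 0.9, 0.1, 0.1],
        [0.9, 1.0, 0.1, 0.1],
        [0.1, 0.1, 1.0, 0.8],
        [0.1, 0.1, 0.8, 1.0]]
# Distance of [79]: d_ij = sqrt(2 * (1 - Corr_ij)).
dist = [[math.sqrt(2.0 * (1.0 - c)) for c in row] for row in corr]
merges = upgma(dist)   # the first two merges join {0, 1} and {2, 3}
```

The merge heights define the ultrametric distance, and reading the merge history as a tree yields the dendrogram directly.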
11.5 Concluding Statement
We have shown that the formalism introduced in this article can be effectively used to calculate the functional connectivity of neurons within a firing-rate network model. In this article we concentrated mostly on computing the Pearson correlation among all pairs of neurons in the network. However, the work reported here also lays the basis for computing more refined measures of functional connectivity (such as those based on information theory). This in turn will allow future studies to quantify analytically the transmission of information among the elements of such recurrent networks, and how information transmission is modulated by factors such as the strength and dynamics of external inputs.
Declarations
Acknowledgements
DF was supported by the Autonomous Province of Trento, Call “Grandi Progetti 2012”, project “Characterizing and improving brain mechanisms of attention—ATTEND”. He was also supported by the ERC grant NerVi no. 227747, the FACETS-ITN Marie Curie Initial Training Network no. 237955 and the IP project BrainScaleS no. 269921.
OF was partially supported by the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 269921 (BrainScaleS), no. 318723 (Mathemacs), and by the ERC advanced grant NerVi no. 227747.
SP was supported by the SICODE project of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant no. FP7-284553, and by the Autonomous Province of Trento, Call “Grandi Progetti 2012”, project “Characterizing and improving brain mechanisms of attention—ATTEND”.
The funders had no role in study design, data collection and analysis, decision to publish, interpretation of results, or preparation of the manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
References
 1. Womelsdorf T, Schoffelen JM, Oostenveld R, Singer W, Desimone R, Engel AK, Fries P. Modulation of neuronal interactions through neuronal synchronization. Science. 2007;316(5831):1609–12.
 2. Friston KJ. Functional and effective connectivity: a review. Brain Connect. 2011;1:13–36.
 3. Sporns O, Chialvo D, Kaiser M, Hilgetag C. Organization, development and function of complex brain networks. Trends Cogn Sci. 2004;8(9):418–25.
 4. Ponten SC, Daffertshofer A, Hillebrand A, Stam CJ. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model. NeuroImage. 2010;52(3):985–94.
 5. Koch M. An investigation of functional and anatomical connectivity using magnetic resonance imaging. NeuroImage. 2002;16(1):241–50.
 6. Eickhoff SB, Jbabdi S, Caspers S, Laird AR, Fox PT, Zilles K, Behrens TEJ. Anatomical and functional connectivity of cytoarchitectonic areas within the human parietal operculum. J Neurosci. 2010;30:6409–21.
 7. Cabral J, Hugues E, Kringelbach ML, Deco G. Modeling the outcome of structural disconnection on resting-state functional connectivity. NeuroImage. 2012;62(3):1342–53.
 8. Deco G, Ponce-Alvarez A, Mantini D, Romani GL, Hagmann P, Corbetta M. Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. J Neurosci. 2013;33(27):11239–52.
 9. Hopfield JJ. Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci USA. 1984;81(10):3088–92.
 10. David O, Cosmelli D, Friston KJ. Evaluation of different measures of functional connectivity using a neural mass model. NeuroImage. 2004;21:659–73.
 11. Sznitman A. Nonlinear reflecting diffusion process, and the propagation of chaos and fluctuations associated. J Funct Anal. 1984;56:311–36.
 12. Sznitman A. A propagation of chaos result for Burgers’ equation. Probab Theory Relat Fields. 1986;71:581–613.
 13. Sznitman A. Topics in propagation of chaos. In: Hennequin PL, editor. École d’été de probabilités de Saint-Flour XIX – 1989. Berlin: Springer; 1991. Chap. 3; p. 165–251. (Lecture notes in mathematics; vol. 1464).
 14. Tanaka H. Probabilistic treatment of the Boltzmann equation of Maxwellian molecules. Probab Theory Relat Fields. 1978;46:67–105.
 15. Tanaka H. Central limit theorem for a simple diffusion model of interacting particles. Hiroshima Math J. 1981;11(2):415–23.
 16. Tanaka H. Some probabilistic problems in the spatially homogeneous Boltzmann equation. In: Kallianpur G, editor. Theory and application of random fields. Berlin: Springer; 1983. p. 258–67. (Lecture notes in control and information sciences; vol. 49).
 17. McKean H. A class of Markov processes associated with nonlinear parabolic equations. Proc Natl Acad Sci USA. 1966;56(6):1907–11.
 18. McKean H. Propagation of chaos for a class of nonlinear parabolic equations. In: Stochastic differential equations (Lecture series in differential equations, session 7, Catholic University, 1967). Arlington: Air Force Office of Scientific Research; 1967. p. 41–57.
 19. Samuelides M, Cessac B. Random recurrent neural networks dynamics. Eur Phys J Spec Top. 2007;142(1):89–122.
 20. Touboul J, Hermann G, Faugeras O. Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst. 2012;11(1):49–81.
 21. Baladron J, Fasoli D, Faugeras O, Touboul J. Mean-field description and propagation of chaos in networks of Hodgkin–Huxley and FitzHugh–Nagumo neurons. J Math Neurosci. 2012;2(1):10.
 22. Touboul J. The propagation of chaos in neural fields. Ann Appl Probab. 2014;24(3):1298–328.
 23. Pernice V, Staude B, Cardanobile S, Rotter S. How structure determines correlations in neuronal networks. PLoS Comput Biol. 2011;7(5):e1002059.
 24. Pernice V, Staude B, Cardanobile S, Rotter S. Recurrent interactions in spiking networks with arbitrary topology. Phys Rev E. 2012;85:031916.
 25. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol. 2012;8(3):e1002408.
 26. Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Phys Rev E. 1994;50:3171–91.
 27. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327(5965):587–90.
 28. Bressloff PC. Stochastic neural field theory and the system-size expansion. SIAM J Appl Math. 2010;70(5):1488–521.
 29. Buice MA, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Comput Biol. 2013;9(1):e1002872.
 30. Faugeras O, MacLaurin J. A large deviation principle for networks of rate neurons with correlated synaptic weights. BMC Neurosci. 2013;14(Suppl 1):P252.
 31. Faugeras O, MacLaurin J. Asymptotic description of stochastic neural networks. I. Existence of a large deviation principle. C R Math. 2014;352(10):841–6.
 32. Faugeras O, MacLaurin J. Asymptotic description of stochastic neural networks. II. Characterization of the limit law. C R Math. 2014;352(10):847–52.
 33. Ditlevsen S, Samson A. Introduction to stochastic models in biology. In: Bachar M, Batzel J, Ditlevsen S, editors. Stochastic biomathematical models. Berlin: Springer; 2013. p. 3–34.
 34. Bachar M, Batzel J, Ditlevsen S. Stochastic biomathematical models: with applications to neuronal modeling. Berlin: Springer; 2013. (Lecture notes in mathematics; vol. 2058).
 35. Magnus W. On the exponential solution of differential equations for a linear operator. Commun Pure Appl Math. 1954;7(4):649–73.
 36. Isserlis L. On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika. 1918;12(1/2):134–9.
 37. Chai B, Walther D, Beck D, Fei-Fei L. Exploring functional connectivities of the human brain using multivariate information analysis. In: Bengio Y, Schuurmans D, Lafferty JD, Williams CKI, Culotta A, editors. Advances in neural information processing systems 22. Red Hook: Curran Associates; 2009. p. 270–8.
 38. Thatcher RW, Krause PJ, Hrybyk M. Cortico-cortical associations and EEG coherence: a two-compartmental model. Electroencephalogr Clin Neurophysiol. 1986;64:123–43.
 39. Honey CJ, Kötter R, Breakspear M, Sporns O. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc Natl Acad Sci USA. 2007;104(24):10240–5.
 40. Besserve M, Schölkopf B, Logothetis NK, Panzeri S. Causal relationships between frequency bands of extracellular signals in visual cortex revealed by an information theoretic analysis. J Comput Neurosci. 2010;29(3):547–66.
 41. Sato JR, Junior EA, Takahashi DY, De Maria FM, Brammer MJ, Morettin PA. A method to produce evolving functional connectivity maps during the course of an fMRI experiment using wavelet-based time-varying Granger causality. NeuroImage. 2006;31(1):187–96.
 42. Bosman C, Schoffelen JM, Brunet N, Oostenveld R, Bastos A, Womelsdorf T, Rubehn B, Stieglitz T, De Weerd P, Fries P. Attentional stimulus selection through selective synchronization between monkey visual areas. Neuron. 2012;75(5):875–88.
 43. Barnett L, Seth AK. The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference. J Neurosci Methods. 2014;223:50–68.
 44. Tee GJ. Eigenvectors of block circulant and alternating circulant matrices. NZ J Math. 2007;36:195–211.
 45. Boucsein C, Nawrot MP, Schnepel P, Aertsen A. Beyond the cortical column: abundance and physiology of horizontal connections imply a strong role for inputs from the surround. Front Neurosci. 2011;5:32.
 46. Brouwer AE, Haemers WH. Spectra of graphs. New York: Springer; 2011.
 47. Munarini E, Perelli Cippo C, Scagliola A, Zagaglia Salvi N. Double graphs. Discrete Math. 2008;308(23):242–54.
 48. Hansel D, Sompolinsky H. Modeling feature selectivity in local cortical circuits. In: Koch C, Segev I, editors. Methods in neuronal modeling: from ions to networks. Cambridge: MIT Press; 1998. Chap. 13; p. 1–25.
 49. Genz A. Numerical computation of multivariate normal probabilities. J Comput Graph Stat. 1992;1:141–9.
 50. Mazzoni A, Panzeri S, Logothetis NK, Brunel N. Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Comput Biol. 2008;4(12):e1000239.
 51. Shea-Brown E, Josić K, De La Rocha J, Doiron B. Correlation and synchrony transfer in integrate-and-fire neurons: basic properties and consequences for coding. Phys Rev Lett. 2008;100:108102.
 52. Quiroga RQ, Panzeri S. Extracting information from neuronal populations: information theory and decoding approaches. Nat Rev Neurosci. 2009;10(3):173–85.
 53. Cavallari S, Panzeri S, Mazzoni A. Comparison of the dynamics of neural interactions in integrate-and-fire networks with current-based and conductance-based synapses. Front Neural Circuits. 2014;8:12.
 54. Pillai SU, Suel T, Cha S. The Perron–Frobenius theorem: some of its applications. IEEE Signal Process Mag. 2005;22(2):62–75.
 55. Renart A, Moreno-Bote R, Wang XJ, Parga N. Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput. 2007;19(1):1–46.
 56. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol. 2012;8(8):e1002596.
 57. De La Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007;448(7155):802–6.
 58. Ecker AS, Berens P, Cotton RJ, Subramaniyan M, Denfield GH, Cadwell CR, Smirnakis SM, Bethge M, Tolias AS. State dependence of noise correlations in macaque primary visual cortex. Neuron. 2014;82(1):235–48.
 59. Goris RL, Movshon JA, Simoncelli EP. Partitioning neuronal variability. Nat Neurosci. 2014;17(6):858–65.
 60. Frank TD, Beek PJ. Stationary solutions of linear stochastic delay differential equations: applications to biological systems. Phys Rev E. 2001;64:021917.
 61. Yi S, Ulsoy AG. Solution of a system of linear delay differential equations using the matrix Lambert function. In: Proceedings of the American control conference; 2006. p. 2433–8.
 62. Wormald NC. Models of random regular graphs. In: Lamb J, Preece D, editors. Surveys in combinatorics, 1999. Cambridge: Cambridge University Press; 1999. p. 239–98. (London mathematical society lecture note series; vol. 276).
 63. Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.
 64. FitzHugh R. Impulses and physiological states in theoretical models of nerve membrane. Biophys J. 1961;1(6):445–66.
 65. Nagumo J, Arimoto S, Yoshizawa S. An active pulse transmission line simulating nerve axon. Proc Inst Radio Eng. 1962;50(10):2061–70.
 66. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117(4):500–44.
 67. Lapicque L. Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarization. J Physiol Pathol Gén. 1907;9:620–35.
 68. Campbell SR, Wang DL. Synchronization and desynchronization in a network of locally coupled Wilson–Cowan oscillators. IEEE Trans Neural Netw. 1996;7(3):541–54.
 69. Ledoux E, Brunel N. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Front Comput Neurosci. 2011;5:25.
 70. Macke JH, Opper M, Bethge M. Common input explains higher-order correlations and entropy in a simple model of neural population activity. Phys Rev Lett. 2011;106:208102.
 71. Montani F, Ince RAA, Senatore R, Arabzadeh E, Diamond ME, Panzeri S. The impact of high-order interactions on the rate of synchronous discharge and information transmission in somatosensory cortex. Philos Trans R Soc A, Math Phys Eng Sci. 2009;367(1901):3297–310.
 72. Granot-Atedgi E, Tkačik G, Segev R, Schneidman E. Stimulus-dependent maximum entropy models of neural population codes. PLoS Comput Biol. 2013;9(3):e1002922.
 73. Rowlinson P. On angles and perturbations of graphs. Bull Lond Math Soc. 1988;20(3):193–7.
 74. Rowlinson P. Graph perturbations. In: Keedwell AD, editor. Surveys in combinatorics, 1991. Cambridge: Cambridge University Press; 1991. p. 187–220. (London mathematical society lecture note series; vol. 166).
 75. Rowlinson P. More on graph perturbations. Bull Lond Math Soc. 1990;22(3):209–16.
 76. Rowlinson P. The characteristic polynomials of modified graphs. Discrete Appl Math. 1996;67(1–3):209–19.
 77. Cvetković DM, Rowlinson P, Simić S. Eigenspaces of graphs. Cambridge: Cambridge University Press; 1997. (Encyclopedia of mathematics and its applications).
 78. Van Den Heuvel MP, Sporns O. Rich-club organization of the human connectome. J Neurosci. 2011;31(44):15775–86.
 79. Mantegna RN. Hierarchical structure in financial markets. Eur Phys J B. 1999;11:193–7.
 80. Sokal RR, Michener CD. A statistical method for evaluating systematic relationships. Univ Kans Sci Bull. 1958;28:1409–38.
 81. Minai AA, Williams RD. Original contribution: on the derivatives of the sigmoid. Neural Netw. 1993;6(6):845–53.
 82. Carlitz L. Eulerian numbers and polynomials. Math Mag. 1959;32(5):247–60.
 83. Miller SJ. An identity for sums of polylogarithm functions. Integers. 2008;8:A15.
 84. Deeba EY, Rodriguez DM. Stirling’s series and Bernoulli numbers. Am Math Mon. 1991;98(5):423–6.
 85. Wood D. The computation of polylogarithms. Canterbury (UK): Computing Laboratory, University of Kent; 1992. Report No.: 1592.
 86. Adegoke K, Layeni O. The higher derivatives of the inverse tangent function and rapidly convergent BBP-type formulas for pi. Appl Math E-Notes. 2010;10:70–5.