Path-Integral Methods for Analyzing the Effects of Fluctuations in Stochastic Hybrid Neural Networks
Paul C. Bressloff
https://doi.org/10.1186/s13408-014-0016-z
© Bressloff; licensee Springer. 2015
Received: 3 September 2014
Accepted: 11 December 2014
Published: 27 February 2015
Abstract
We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant \(\tau_{a}\) and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter \(\epsilon=\tau_{a}/\tau\), which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit \(\epsilon\rightarrow0\)). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs.
Finally, we carry out a \(1/\epsilon\)-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.
Keywords
Path integrals · Large deviations · Stochastic neural networks · Stochastic hybrid systems
1 Introduction
One of the major challenges in neuroscience is developing our understanding of how noise at the molecular and cellular levels affects dynamics and information processing at the macroscopic level of synaptically coupled neuronal populations. It is well known that the spike trains of individual cortical neurons in vivo tend to be very noisy, having interspike interval (ISI) distributions that are close to Poisson [1, 2]. Indeed, one observes trial-to-trial variability in spike trains, even across trials in which external stimuli are identical. On the other hand, neurons are continuously bombarded by thousands of synaptic inputs, many of which are uncorrelated, so that an application of the law of large numbers would suggest that total input fluctuations are small. This would make it difficult to account for the Poisson-like behavior of individual neurons, even when stochastic ion channel fluctuations or random synaptic background activity is taken into account. One paradigm for reconciling these issues is the so-called balanced network [3–5]. In such networks, each neuron is driven by a combination of strong excitation and strong inhibition, which mainly cancel each other out, so that the remaining fluctuations occasionally and irregularly push the neuron over the firing threshold. Even in the absence of any external sources of noise, the resulting deterministic dynamics is chaotic and neural outputs are Poisson-like. Interestingly, there is some experimental evidence that cortical networks can operate in a balanced regime [6].
Another emergent feature of balanced networks is that they can support an asynchronous state characterized by large variability in single neuron spiking, and yet arbitrarily small pairwise correlations, even in the presence of substantial amounts of shared inputs [7]. Thus there is a growing consensus that the trial-to-trial irregularity in the spiking of individual neurons is often unimportant, and that information is typically encoded in firing rates. There is then another level of neural variability, namely, trial-to-trial variations in the firing rates themselves. Recent physiological data shows that the onset of a stimulus reduces firing-rate fluctuations in cortical neurons, while having little or no effect on the spiking variability [8]. Litwin-Kumar and Doiron have recently shown how these two levels of stochastic variability can emerge in a balanced network of randomly connected spiking neurons, in which a small amount of clustered connections induces firing-rate fluctuations superimposed on spontaneous spike fluctuations [9].
Various experimental and computational studies of neural variability thus motivate the incorporation of noise into rate-based neural network models [10]. One approach is to add extrinsic noise terms to deterministic models resulting in a neural Langevin equation [11–15]. An alternative approach is to assume that noise arises intrinsically as a collective population effect, and to describe the stochastic dynamics in terms of a neural master equation [16–20]. In the latter case, neurons are partitioned into a set of M local homogeneous populations labeled \(\alpha=1,\ldots,M\), each consisting of \({\mathcal{N}}\) neurons. The state of each population at time t is specified by the number \({\mathcal{N}}_{\alpha}(t)\) of active neurons in a sliding window \((t,t+\Delta t]\), and transition rates between the discrete states are chosen so that standard rate-based models are obtained in the mean-field limit, where statistical correlations can be ignored. There are two versions of the neural master equation, which can be distinguished by the size of the sliding window width Δt. (Note that the stochastic models are keeping track of changes in population activity.) One version assumes that each population operates close to an asynchronous state for large \({\mathcal{N}}\) [18, 19], so that one-step changes in population activity occur relatively slowly. Hence, one can set \(\Delta t =1\) and take \({\mathcal{N}}\) to be large but finite. The other version of the neural master equation assumes that population activity is approximately characterized by a Poisson process [17, 20]. In order to maintain a one-step jump Markov process, it is necessary to take the limits \(\Delta t \rightarrow0\), \({\mathcal{N}}\rightarrow\infty\) such that \({\mathcal{N}}\Delta t=1\). Thus, one considers the number of active neurons in an infinite background sea of inactive neurons, which is reasonable if the networks are in low activity states.
(Note that it is also possible to interpret the master equation of Buice et al. in terms of activity states of individual neurons rather than populations [17, 20].)
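To make the master-equation description concrete, a one-population version can be simulated exactly with the Gillespie algorithm. The sketch below is purely illustrative: the sigmoidal gain F, the weight w, and the transition rates (activation of one neuron at rate \(N F(w n/N)\), deactivation at rate n) are plausible stand-ins rather than the specific rates used later in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def F(x):
    # Illustrative sigmoidal gain function (not the paper's calibrated form)
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.3)))

def gillespie(N=100, w=1.2, T=50.0):
    """Gillespie simulation of a one-population activity master equation:
    n -> n+1 at rate N*F(w*n/N), n -> n-1 at rate n."""
    t, n = 0.0, 0
    ts, ns = [0.0], [0]
    while t < T:
        up = N * F(w * n / N)      # activation rate
        down = float(n)            # deactivation rate
        total = up + down
        t += rng.exponential(1.0 / total)   # exponential waiting time
        n += 1 if rng.random() < up / total else -1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)

ts, ns = gillespie()
print("late-time mean activity fraction:", ns[len(ns) // 2:].mean() / 100)
```

Averaging the trajectory over late times gives an estimate of the mean activity fraction, which can be compared against the fixed point of the corresponding mean-field rate equation.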
One way to link the two versions of the neural master equation is to extend the Doi–Peliti path-integral representation of chemical master equations [21–23] to the neural case; the difference between the two versions then reduces to a different choice of scaling of the underlying action functional [18]. Buice et al. [17, 20] used diagrammatic perturbation methods (Feynman graphs) to generate a truncated moment hierarchy based on factorial moments, and thus determined corrections to mean-field theory involving coupling to two-point and higher-order cumulants. They also used renormalization group methods to derive scaling laws for statistical correlations close to criticality, that is, close to a bifurcation point of the underlying deterministic model [17]. On the other hand, Bressloff [18, 19] showed how the path-integral representation of the master equation can be used to investigate large deviations or rare event statistics underlying escape from the basin of attraction of a metastable state, following along analogous lines to previous work on large deviations in chemical master equations [24–26].
One limitation of both versions of the neural master equation is that they neglect the dynamics of synaptic currents. The latter could be particularly significant if the time scale τ of synaptic dynamics is larger than the window width Δt. Therefore, we recently extended the Buice et al. neural master equation by formulating the network population dynamics in terms of a stochastic hybrid system also known as a ‘velocity’ jump Markov process [27]. The state of each population is now described in terms of two stochastic variables \(U_{\alpha}(t)\) and \({\mathcal{N}}_{\alpha}(t)\). The synaptic variables \(U_{\alpha}(t)\) evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. These equations are only valid between jumps in spiking activity \({\mathcal{N}}_{\alpha}(t)\), which are described by a jump Markov process whose transition rates depend on the synaptic variables. We also showed how asymptotic methods recently developed to study metastability in other stochastic hybrid systems, such as stochastic ion channels, motor-driven intracellular cargo transport, and gene networks [28–32], can be extended to analyze metastability in stochastic hybrid neural networks, in a regime where the synaptic dynamics is much slower than the spiking dynamics. In the case of ion channels, \({\mathcal{N}}_{\alpha}\) would represent the number of open channels of type α, whereas \(U_{\alpha}\) would be replaced by the membrane voltage V. On the other hand, for intracellular transport, \({\mathcal{N}}_{\alpha}\) would be the number of motors of type α actively transporting a cargo and \(U_{\alpha}\) would be replaced by spatial position along the track.
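A minimal one-population instance of such a stochastic hybrid (piecewise-deterministic) process can be simulated with a fixed time step: integrate the synaptic variable deterministically, and let the discrete activity variable jump with probabilities proportional to the \(O(1/\epsilon)\) transition rates. The gain function and rates below are illustrative assumptions, not the model's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)

def F(u):
    # Illustrative sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def simulate_hybrid(N=50, w=1.5, eps=0.05, dt=1e-4, T=20.0):
    """Fixed-step simulation of a one-population hybrid model:
    piecewise-deterministic drift du/dt = -u + w*n/N between jumps,
    with n jumping at O(1/eps) rates that depend on u."""
    steps = int(T / dt)
    u, n = 0.0, 0
    us = np.empty(steps)
    for k in range(steps):
        u += dt * (-u + w * n / N)               # deterministic synaptic dynamics
        if rng.random() < dt * N * F(u) / eps:   # activation jump n -> n+1
            n += 1
        elif n > 0 and rng.random() < dt * n / eps:  # deactivation jump n -> n-1
            n -= 1
        us[k] = u
    return us

us = simulate_hybrid()
print("late-time mean of u:", us[len(us) // 2:].mean())
```

Decreasing ϵ makes the jump process equilibrate ever faster relative to u, which is the regime exploited by the asymptotic expansions in the text.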
In this paper we show how a path-integral representation of a stochastic hybrid neural network provides a unifying framework for a variety of asymptotic perturbation methods. The basic hybrid neural network model is described in Sect. 2, where we consider several limiting cases. In Sect. 3, we reprise the path-integral construction of Bressloff and Newby [33], highlighting certain features that were not covered in the original treatment, including the connection with large-deviation principles [34], and potential difficulties in the thermodynamic limit \({\mathcal{N}}\rightarrow\infty\). In Sect. 4, we derive the basic variational principle that can be used to explore maximum-likelihood paths of escape from a metastable state, and relate the theory to the underlying Hamiltonian structure of the path-integral representation. In Sect. 5, we show how the path-integral representation provides an efficient method for deriving a diffusion approximation of a stochastic hybrid neural network. Although the diffusion approximation breaks down when considering escape problems, it provides useful insights into the effects of fluctuations within the basin of attraction of a given solution. We illustrate this by using the diffusion approximation to explore the effects of noise on neural pattern formation in a spatially structured network. In particular, we show how noise expands the parameter regime over which patterns can be observed, in an analogous fashion to stochastic PDEs. Finally, in Sect. 6, we use the path-integral representation to derive corrections to voltage-based mean-field equations, along analogous lines to the analysis of activity-based mean-field equations arising from the neural master equation [17, 20].
2 Stochastic Hybrid Network Model
 (i)
A quasi-steady-state (QSS) diffusion approximation of the stochastic hybrid system, in which the CK equation (2.6) reduces to a Fokker–Planck equation [27]. This exploits the fact that for small ϵ there are typically a large number of transitions between different firing states n while the synaptic currents u hardly change at all. This implies that the system rapidly converges to the (quasi) steady state \(\rho(\mathbf{u},\mathbf{n})\), which will then be perturbed as u slowly evolves.
 (ii)
The diffusion approximation captures the Gaussian-like fluctuations within the basin of attraction of a fixed point of the mean-field equations. However, for small ϵ this yields exponentially large errors for the transition rates between metastable states. (A similar problem arises in approximating chemical and neural master equations by a Fokker–Planck equation in the large N limit [19, 24, 40].) Instead, one can use a Wentzel–Kramers–Brillouin (WKB) approximation of solutions to the full CK equation to calculate the mean first passage time for escape [27].
 (iii)
Another way to analyze the dynamics of a stochastic hybrid network is to derive moment equations. However, for a nonlinear system, this yields an infinite hierarchy of coupled moment equations, resulting in the problem of moment closure. In the case of small ϵ, one can expand the moment equations to some finite order in ϵ.
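For item (i), the QSS reduction replaces the fast jump process by Gaussian fluctuations, giving a Langevin equation that can be integrated by the Euler–Maruyama scheme. The sketch below assumes the one-population drift \(-u + wF(u)\) and an illustrative noise amplitude \(g(u) = w\sqrt{2F(u)/N}\); the precise diffusion coefficient would come from the QSS calculation in [27].

```python
import numpy as np

rng = np.random.default_rng(3)

def F(u):
    # Illustrative sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))

def euler_maruyama(N=50, w=1.5, eps=0.05, dt=1e-3, T=200.0, u0=0.1):
    """Euler-Maruyama integration of a QSS Langevin approximation
    du = (-u + w F(u)) dt + sqrt(eps) g(u) dW, with the assumed
    noise amplitude g(u) = w * sqrt(2 F(u) / N)."""
    steps = int(T / dt)
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        drift = -u[k] + w * F(u[k])
        g = w * np.sqrt(2.0 * F(u[k]) / N)
        u[k + 1] = u[k] + drift * dt + np.sqrt(eps * dt) * g * rng.standard_normal()
    return u

u = euler_maruyama()
print("time-averaged u:", u[len(u) // 2:].mean())
```

For these parameters the drift has a single stable fixed point, so the trajectory simply fluctuates around it; as noted above, using the same scheme to estimate escape times between metastable states would incur exponentially large errors.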
3 PathIntegral Representation
3.1 OnePopulation Model
3.2 LargeDeviation Principles
It is important to point out that the formal derivation of the path integral (3.10), see also [33], involves a few steps that have not been justified rigorously. First, we ‘gauge fix’ the path integral by setting \(q_{j}=\epsilon p_{j}\) with \(p_{j}\) pure imaginary. However, when we carry out steepest descents, we assume that the dominant contribution to the path integral in the complex p-plane occurs for real \(p_{j}\). (This amounts to an assumption about analytic continuation.) This then allows us to apply the Perron–Frobenius theorem to the linear operator of the eigenvalue equation. Second, we have not established that the discrete path integral converges to a well-defined functional measure in the continuum limit. Nevertheless, it turns out that the resulting action \(S[u,p]\) is identical to one obtained using large-deviation theory [41–43]. This connection has recently been established by Bressloff and Faugeras [34]. We briefly summarize the main results here.

For any \((u,\psi)\in C([0,T]) \times[0,1]^{\varGamma}\) define
$$ j(u,\psi)=\sup_{z \in(0,\infty)^{\varGamma}} \sum_{(n,n') \in\varGamma\times\varGamma} \psi_{n} W_{n'n}(u) \biggl[1- \frac{z_{n'}}{z_{n}} \biggr]. $$(3.15)
Then, for any given path \(\{(u(t),\psi(t))\}_{t\in[0,T]} \in {\mathcal{Y}}_{u_{0}}\),
$$ \mathbb{P}^{\epsilon}_{u_{0},n_{0}} \bigl[ \bigl\{ \bigl(u(t),\psi(t)\bigr)\bigr\}_{t \in[0,T]} \bigr]\sim \mathrm{e}^{-J_{T}(\{(u(t),\psi(t))\}_{t\in[0,T]})/\epsilon}, $$(3.16)
where the rate function \(J_{T}\colon {\mathcal{Y}}_{u_{0}} \to[0,\infty)\) is given by
$$ J_{T}\bigl(\bigl\{\bigl(u(t),\psi(t)\bigr)\bigr\}_{t\in[0,T]}\bigr)=\int_{0}^{T} j\bigl(u(t), \psi(t)\bigr)\,dt. $$(3.17)
Here the symbol ∼ means asymptotic logarithmic equivalence in the limit \(\epsilon\rightarrow0\).

Given an element \(\{u(t)\}_{t\in[0,T]}\in C([0,T])\), we have
$$ \mathbb{P}^{\epsilon}_{u_{0},n_{0}} \bigl[ \bigl\{ u(t)\bigr\}_{t \in[0,T]} \bigr]\sim \mathrm{e}^{-J_{T}(\{u(t)\}_{t\in[0,T]})/\epsilon}, $$
where the rate function \(J_{T}\colon C([0,T],\varOmega) \to[0,\infty)\) is given by
$$ J_{T}\bigl(\bigl\{ u(t)\bigr\}_{t\in[0,T]}\bigr)= \inf_{\{\psi(t)\}_{t\in[0,T]}:\, \dot{u}(t)=\sum_{n}v_{n}(u)\psi_{n}} J_{T}\bigl(\bigl\{\bigl(u(t),\psi(t)\bigr)\bigr\}_{t\in[0,T]}\bigr). $$(3.18)
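For a two-state network (\(\varGamma=\{0,1\}\)) the supremum defining \(j(u,\psi)\), with the bracket read as \(1 - z_{n'}/z_{n}\), depends only on the ratio \(r = z_{1}/z_{0}\) and can be evaluated in closed form: \(j = (\sqrt{\psi_{0}W_{10}} - \sqrt{\psi_{1}W_{01}})^{2}\), which vanishes precisely when ψ is the stationary distribution of the jump process. The following check, with arbitrary illustrative rates, compares a numerical maximization against this closed form.

```python
import numpy as np

# Two-state example of j(u, psi) = sup_z sum psi_n W_{n'n} [1 - z_{n'}/z_n].
# Only the ratio r = z_1/z_0 matters; optimizing over r > 0 gives
# j = (sqrt(psi_0 W_10) - sqrt(psi_1 W_01))^2. Rates are illustrative.

psi = np.array([0.7, 0.3])
W10, W01 = 2.0, 1.5          # transition rates 0 -> 1 and 1 -> 0

a, b = psi[0] * W10, psi[1] * W01

def objective(r):
    # psi_0 W_10 (1 - r) + psi_1 W_01 (1 - 1/r), with r = z_1/z_0 > 0
    return a * (1.0 - r) + b * (1.0 - 1.0 / r)

r_grid = np.exp(np.linspace(-5.0, 5.0, 200001))   # dense grid in log r
j_numeric = objective(r_grid).max()
j_exact = (np.sqrt(a) - np.sqrt(b)) ** 2

print(j_numeric, j_exact)    # the two values agree closely
assert abs(j_numeric - j_exact) < 1e-6
```

The closed form makes the large-deviation interpretation transparent: the cost density is zero only along paths whose occupation measure ψ balances the probability fluxes of the jump process.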
3.3 Calculation of Perron Eigenvalue
3.4 Multipopulation Model
4 A Variational Principle and Optimal Paths of Escape
5 The Diffusion Approximation and Neural Pattern Formation
5.1 NoiseInduced Pattern Formation
5.2 Continuum Limit
From a numerical perspective, any computer simulation would involve rediscretizing space and then solving a timediscretized version of the resulting stochastic neural field equation. On the other hand, in order to investigate analytically the effects of noise on spatiotemporal dynamics such as traveling waves, it is more useful to work directly with stochastic neural fields. One can then adapt various PDE methods for studying noise in spatially extended systems [15, 54–58]. Finally, note that a largedeviation principle for a stochastic neural field with additive noise has been developed in [59].
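As a sketch of the re-discretization just described, the following fragment integrates a spatially discretized stochastic neural field with a Mexican-hat kernel by Euler–Maruyama on a periodic domain. The kernel, gain function, parameter values, and the additive noise scaling \(\sqrt{\epsilon\,dt/dx}\) are illustrative choices rather than the equations of Sect. 5.

```python
import numpy as np

rng = np.random.default_rng(4)

L, M = 20.0, 128                  # domain size and number of grid points
dx = L / M
x = np.arange(M) * dx

def F(u):
    # Illustrative sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-8.0 * (u - 0.2)))

def w(d):
    # Mexican-hat weight kernel: local excitation, broader inhibition
    return 2.0 * np.exp(-d**2) - np.exp(-d**2 / 4.0)

# Periodic distance matrix and discretized convolution operator
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)
W = w(d) * dx

dt, T, eps = 0.01, 50.0, 0.01
u_final = 0.01 * rng.standard_normal(M)   # small random initial data
for _ in range(int(T / dt)):
    drift = -u_final + W @ F(u_final)
    u_final = u_final + drift * dt + np.sqrt(eps * dt / dx) * rng.standard_normal(M)

print("spatial std of final state:", u_final.std())
```

A spatial standard deviation well above the noise floor signals a noise-sustained pattern; sweeping the kernel amplitudes reproduces the enlargement of the pattern-forming regime discussed in Sect. 5.1.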
6 Generating Functionals and the \(1/\epsilon\) Loop Expansion
One step beyond the Gaussian approximation is to consider corrections to the mean-field equation (2.19), which couple the mean synaptic current with higher-order moments. As demonstrated previously for neural master equations [17, 18, 20], path integrals provide a systematic method for generating the hierarchy of moment equations. We will illustrate this by calculating the lowest-order correction to mean-field theory based on coupling to second-order correlations. One could then investigate the bifurcation structure of the higher-dimensional dynamical system along analogous lines to Touboul and Ermentrout [13]. However, some caution must be exercised, since one does not keep track of the validity of the truncated moment equations. Note that the path-integral methods used in this section were originally introduced within the context of stochastic processes by Martin–Siggia–Rose [60], and have previously been applied to stochastic neural networks by Sompolinsky et al. [61, 62] and Buice et al. [17, 20].
6.1 Generating Functional and Moments
6.2 Effective Action and Corrections to MeanField Equations
The corrections to mean-field theory for a stochastic hybrid neural network differ significantly from those derived for the Buice et al. master equation [17, 20]. There are two primary sources of such differences. One arises from the fact that the mean equation is in ‘Amari form’ (with the weight matrix outside the nonlinearity). This accounts for all the difference in (6.16) for the mean, which would otherwise be identical to that of Buice et al., and the last term involving C in (6.17). The other difference is in the nonhomogeneous source term for the C equation, which appears as \(\sum_{\gamma}w_{\alpha\gamma}F(u_{\gamma})w_{\beta\gamma}\). Whereas the Buice et al. correlations are determined by multiple network motifs (with the lowest order being the direct connection \(w_{\alpha\beta}\) from β to α), our result for the hybrid model indicates that the source term is given by divergent motifs indicating common input from a third population (population γ → populations α, β).
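The motif distinction can be made concrete: the source term above is the divergent-motif matrix \(Q_{\alpha\beta} = \sum_{\gamma} w_{\alpha\gamma}F(u_{\gamma})w_{\beta\gamma}\), i.e. \(Q = W\,\operatorname{diag}(F(u))\,W^{\mathsf{T}}\). The small numerical example below, with arbitrary illustrative weights and currents, shows that Q is automatically symmetric even for an asymmetric weight matrix, reflecting the symmetric effect of common input on pairs of populations.

```python
import numpy as np

def F(u):
    # Illustrative sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-u))

# Arbitrary asymmetric weight matrix and synaptic currents (illustrative)
W = np.array([[0.0, 1.2, 0.5],
              [0.8, 0.0, 0.3],
              [0.4, 0.6, 0.0]])
u = np.array([0.1, -0.2, 0.5])

# Divergent-motif source matrix Q = W diag(F(u)) W^T
Q = W @ np.diag(F(u)) @ W.T

# Q is symmetric even though W is not: common input from population gamma
# drives populations alpha and beta symmetrically.
print(np.allclose(Q, Q.T))
```

By contrast, a lowest-order direct-connection (chain) motif would contribute through \(w_{\alpha\beta}\) itself and so would inherit the asymmetry of W.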
7 Discussion
In conclusion, we have constructed a path-integral representation of solutions to a stochastic hybrid neural network, and shown how this provides a unifying framework for carrying out various perturbation schemes for analyzing the stochastic dynamics, namely, large deviations, diffusion approximations, and corrections to mean-field equations. We highlighted the fact that the path-integral action can be expressed in terms of a Hamiltonian, which is given by the Perron eigenvalue of an appropriately defined linear operator. The latter depends on the transition rates and drift terms of the underlying hybrid system. The resulting action is consistent with that obtained using large-deviation theory.
In terms of the theory of stochastic neural networks, our hybrid model extends the neural master equation to include the effects of synaptic currents. In the limit of fast synapses one recovers the neural master equation, which can be viewed as a stochastic version of the ‘Wilson–Cowan’ rate equations (with the weight matrix inside the nonlinearity). On the other hand, in the case of slow synapses, one obtains a stochastic version of the ‘Amari’ rate equations. This leads to significant differences in the corrections to the mean-field equations. Finally, it should be noted that the path-integral formulation presented here can be applied to more general stochastic hybrid systems such as stochastic ion channels, molecular motors, and gene networks [28–32]. Thus one can view our path-integral construction as the hybrid analog of the Doi–Peliti path integral for master equations.
Declarations
Acknowledgements
PCB was supported by the National Science Foundation (DMS-1120327). Part of the work was conducted while PCB was visiting the NeuroMathComp group of Olivier Faugeras at INRIA, Sophia Antipolis, where he holds an International Chair.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
References
1. Softky WR, Koch C. Cortical cells should fire regularly, but do not. Neural Comput. 1992;4:643–6.
2. Faisal AA, Selen LPJ, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci. 2008;9:292.
3. Shadlen MN, Newsome WT. Noise, neural codes and cortical organization. Curr Opin Neurobiol. 1994;4:569–79.
4. van Vreeswijk C, Sompolinsky H. Chaotic balanced state in a model of cortical circuits. Neural Comput. 1998;10:1321–71.
5. Vogels TP, Abbott LF. Signal propagation and logic gating in networks of integrate-and-fire neurons. J Neurosci. 2005;25:786–95.
6. London M, Roth A, Beeren L, Häusser M, Latham PE. Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature. 2010;466:123–7.
7. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, Harris KD. The asynchronous state in cortical circuits. Science. 2010;327:587–90.
8. Churchland MM, et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat Neurosci. 2010;13:369–78.
9. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci. 2012;15:1498–505.
10. Bressloff PC. Spatiotemporal dynamics of continuum neural fields. J Phys A. 2012;45:033001.
11. Hutt A, Longtin A, Schimansky-Geier L. Additive noise-induced Turing transitions in spatial systems with application to neural fields and the Swift–Hohenberg equation. Physica D. 2008;237:755–73.
12. Faugeras O, Touboul J, Cessac B. A constructive mean-field analysis of multipopulation neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci. 2009;3:1.
13. Touboul JD, Ermentrout GB. Finite-size and correlation-induced effects in mean-field dynamics. J Comput Neurosci. 2011;31:453–84.
14. Touboul J, Hermann G, Faugeras O. Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst. 2012;11:49–81.
15. Bressloff PC, Webber MA. Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst. 2012;11:708–40.
16. Ohira T, Cowan JD. Stochastic neurodynamics and the system size expansion. In: Ellacott S, Anderson IJ, editors. Proceedings of the first international conference on mathematics of neural networks. San Diego: Academic Press; 1997. p. 290–4.
17. Buice M, Cowan JD. Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E. 2007;75:051919.
18. Bressloff PC. Stochastic neural field theory and the system-size expansion. SIAM J Appl Math. 2009;70:1488.
19. Bressloff PC. Metastable states and quasicycles in a stochastic Wilson–Cowan model of neuronal population dynamics. Phys Rev E. 2010;85:051903.
20. Buice M, Cowan JD, Chow CC. Systematic fluctuation expansion for neural network activity equations. Neural Comput. 2010;22:377.
21. Doi M. Second quantization representation for classical many-particle systems. J Phys A. 1976;9:1465–77.
22. Doi M. Stochastic theory of diffusion controlled reactions. J Phys A. 1976;9:1479–95.
23. Peliti L. Path integral approach to birth–death processes on a lattice. J Phys. 1985;46:1469–83.
24. Dykman MI, Mori E, Ross J, Hunt PM. Large fluctuations and optimal paths in chemical kinetics. J Chem Phys. 1994;100:5735.
25. Elgart V, Kamenev A. Rare event statistics in reaction–diffusion systems. Phys Rev E. 2004;70:041106.
26. Escudero C, Kamenev A. Switching rates of multistep reactions. Phys Rev E. 2009;79:041149.
27. Bressloff PC, Newby JM. Metastability in a stochastic neural network modeled as a velocity jump Markov process. SIAM J Appl Dyn Syst. 2013;12:1394–435.
28. Keener JP, Newby JM. Perturbation analysis of spontaneous action potential initiation by stochastic ion channels. Phys Rev E. 2011;84:011918.
29. Newby JM, Keener JP. An asymptotic analysis of the spatially inhomogeneous velocity-jump process. SIAM J Multiscale Model Simul. 2011;9:735–65.
30. Newby JM. Isolating intrinsic noise sources in a stochastic genetic switch. Phys Biol. 2012;9:026002.
31. Newby JM, Bressloff PC, Keener JP. Breakdown of fast–slow analysis in an excitable system with channel noise. Phys Rev Lett. 2013;111:128101.
32. Bressloff PC, Newby JM. Stochastic hybrid model of spontaneous dendritic NMDA spikes. Phys Biol. 2014;11:016006.
33. Bressloff PC, Newby JM. Path integrals and large deviations in stochastic hybrid systems. Phys Rev E. 2014;89:042701.
34. Bressloff PC, Faugeras O. On the Hamiltonian structure of large deviations in stochastic hybrid systems. Submitted 2015.
35. Buice M, Chow CC. Beyond mean field theory: statistical field theory for neural networks. J Stat Mech Theory Exp. 2013;2013:P03003.
36. Buice M, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Comput Biol. 2013;9:e1002872.
37. Buice M, Chow CC. Generalized activity equations for spiking neural network dynamics. Front Comput Neurosci. 2013;7:162.
38. Grimmett GR, Stirzaker DR. Probability and random processes. 3rd ed. Oxford: Oxford University Press; 2001.
39. Ermentrout GB. Reduction of conductance-based models with slow synapses to neural nets. Neural Comput. 1994;6:679–95.
40. Hänggi P, Grabert H, Talkner P, Thomas H. Bistable systems: master equation versus Fokker–Planck modeling. Z Phys B. 1984;28:135.
41. Kifer Y. Large deviations and adiabatic transitions for dynamical systems and Markov processes in fully coupled averaging. Mem Am Math Soc. 2009;201(944):1–129.
42. Faggionato A, Gabrielli D, Ribezzi Crivellari M. Averaging and large deviation principles for fully coupled piecewise deterministic Markov processes and applications to molecular motors. Markov Process Relat Fields. 2010;16:497–548.
43. Faggionato A, Gabrielli D, Ribezzi Crivellari M. Nonequilibrium thermodynamics of piecewise deterministic Markov processes. J Stat Phys. 2009;137:259–304.
44. Graham R, Tel T. On the weak-noise limit of Fokker–Planck models. J Stat Phys. 1984;35:729–48.
45. Lugo CA, McKane AJ. Quasicycles in a spatial predator–prey model. Phys Rev E. 2008;78:051911.
46. Biancalani T, Fanelli D, Di Patti F. Stochastic Turing patterns in the Brusselator model. Phys Rev E. 2010;81:046215.
47. Butler TC, Goldenfeld N. Robust ecological pattern formation induced by demographic noise. Phys Rev E. 2009;80:030902(R).
48. Butler TC, Goldenfeld N. Fluctuation-driven Turing patterns. Phys Rev E. 2011;84:011112.
49. Woolley TE, Baker RE, Gaffney EA, Maini PK. Stochastic reaction and diffusion on growing domains: understanding the breakdown of robust pattern formation. Phys Rev E. 2011;84:046216.
50. Schumacher LJ, Woolley TE, Baker RE. Noise-induced temporal dynamics in Turing systems. Phys Rev E. 2013;87:042719.
51. McKane AJ, Biancalani T, Rogers T. Stochastic pattern formation and spontaneous polarization: the linear noise approximation and beyond. Bull Math Biol. 2014;76:895–921.
52. Ermentrout GB, Cowan JD. A mathematical theory of visual hallucination patterns. Biol Cybern. 1979;34:137–50.
53. Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener M. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos Trans R Soc Lond B. 2001;356:299–330.
54. Webber M, Bressloff PC. The effects of noise on binocular rivalry waves: a stochastic neural field model. J Stat Mech. 2013;3:P03001.
55. Kilpatrick ZP, Ermentrout GB. Wandering bumps in stochastic neural fields. SIAM J Appl Dyn Syst. 2013;12:61–94.
56. Kilpatrick ZP. Coupling layers regularizes wave propagation in stochastic neural fields. Phys Rev E. 2014;89:022706.
57. Faugeras O, Inglis J. Stochastic neural field theory: a rigorous footing. J Math Biol. 2015. doi:10.1007/s00285-014-0807-6.
58. Krüger M, Stannat W. Front propagation in stochastic neural fields: a rigorous mathematical framework. 2014. arXiv:1406.2675v1.
59. Kuehn C, Riedler MG. Large deviations for nonlocal stochastic neural fields. J Math Neurosci. 2014;4:1.
60. Martin PC, Siggia ED, Rose HA. Statistical dynamics of classical systems. Phys Rev A. 1973;8:423–37.
61. Sompolinsky H, Zippelius A. Dynamic theory of the spin glass phase. Phys Rev Lett. 1981;47:359.
62. Crisanti A, Sompolinsky H. Dynamics of spin systems with randomly asymmetric bonds: Ising spins and Glauber dynamics. Phys Rev A. 1988;37:4865.