# Path Integral Methods for Stochastic Differential Equations

*The Journal of Mathematical Neuroscience* (JMN), **volume 5**, Article number: 8 (2015)

## Abstract

Stochastic differential equations (SDEs) have multiple applications in mathematical neuroscience and are notoriously difficult to solve. Here, we give a self-contained pedagogical review of perturbative field theoretic and path integral methods to calculate moments of the probability density function of SDEs. The methods can be extended to high-dimensional systems such as networks of coupled neurons and even deterministic systems with quenched disorder.

## Introduction

In mathematical neuroscience, stochastic differential equations (SDEs) have been utilized to model stochastic phenomena that range in scale from molecular transport in neurons, to neuronal firing, to networks of coupled neurons, to cognitive phenomena such as decision making [1]. Generally these SDEs are impossible to solve in closed form and must be tackled approximately using methods that include eigenfunction decompositions, WKB expansions, and variational methods in the Langevin or Fokker–Planck formalisms [2–4]. Knowing which method to use is often not obvious, and their application can be unwieldy, especially in higher dimensions. Here we demonstrate how methods adapted from statistical field theory can provide a unifying framework to produce perturbative approximations to the moments of SDEs [5–13]. Any stochastic and even deterministic system can be expressed in terms of a path integral for which asymptotic methods can be systematically applied. Often of interest are the moments of $x(t)$ or the probability density function $p(x,t)$. Path integral methods provide a convenient tool to compute quantities such as moments and transition probabilities perturbatively. They also make renormalization group methods available when perturbation theory breaks down. These methods have recently been applied at the level of networks and to more general stochastic processes [14–25].

Although Wiener introduced path integrals to study stochastic processes, these methods are neither commonly used nor familiar to much of the neuroscience or applied mathematics community. There are many textbooks on path integrals, but most are geared towards quantum field theory or statistical mechanics [26–28]. Here we give a pedagogical review of these methods specifically applied to SDEs. In particular, we show how to apply the response function method [29, 30], which is particularly convenient for computing desired quantities such as moments.

The main goal of this review is to present methods to compute actual quantities. Thus, mathematical rigor will be dispensed with for convenience. This review will be elementary and self-contained. In Sect. 2, we cover moment generating functionals, which expand the definition of generating functions to cover distributions of functions, such as the trajectory of a stochastic process. We continue in Sect. 3 by constructing functional integrals appropriate for the study of SDEs, using the Ornstein–Uhlenbeck process as an example. Section 4 introduces the concept of Feynman diagrams as a tool for carrying out perturbative expansions and introduces the “loop expansion,” a tool for constructing semiclassical approximations. We carry out a perturbative calculation explicitly for the stochastically forced FitzHugh–Nagumo equation in Sect. 5. Finally, Sect. 6 provides the connection between SDEs and equations for the density $p(x,t)$ such as Fokker–Planck equations.

## Moment Generating Functionals

The strategy of path integral methods is to derive a generating function or functional for the moments and response functions for SDEs. The generating functional will be an infinite dimensional generalization of the familiar generating function for a single random variable. In this section we review moment generating functions and show how they can be generalized to functional distributions.

Consider a probability density function (PDF) $P(x)$ for a single real variable *x*. The moments of the PDF are given by

$$\langle x^{n} \rangle = \frac{1}{Z(0)}\int_{-\infty}^{\infty} x^{n} P(x)\,dx$$

and can be obtained directly by taking derivatives of the generating function

$$\langle x^{n} \rangle = \frac{1}{Z(0)}\,\frac{d^{n}}{dJ^{n}} Z(J)\bigg|_{J=0},$$

where *J* is a complex parameter, with

$$Z(J) = \int_{-\infty}^{\infty} e^{Jx} P(x)\,dx.$$

Note that in explicitly including $Z(0)$ we are allowing for the possibility that $P(x)$ is not normalized. This freedom will be convenient especially when we apply perturbation theory. The generating function is a Laplace transform of the PDF and is well defined if the PDF is locally integrable.

For example, the generating function for a Gaussian PDF, $P(x)\propto e^{-(x-a)^{2}/(2G)}$, with mean *a* and variance *G*, is

$$Z(J) = \int_{-\infty}^{\infty} e^{Jx}\,e^{-(x-a)^{2}/(2G)}\,dx. \tag{1}$$

The integral can be computed by completing the square so that the exponent of the integrand can be written as a perfect square

$$Jx - \frac{(x-a)^{2}}{2G} = -A(x-x_{c})^{2} + B.$$

Expanding both sides and equating coefficients yields $x_{c}=JG+a$, $A=1/(2G)$, and $B=J^{2}G/2+Ja$. The integral in (1) can then be computed to obtain

$$Z(J) = Z(0)\,e^{J^{2}G/2+Ja},$$

where

$$Z(0) = \int_{-\infty}^{\infty} e^{-(x-a)^{2}/(2G)}\,dx = \sqrt{2\pi G}$$

is a normalization factor. The mean of *x* is then given by

$$\langle x \rangle = \frac{1}{Z(0)}\,\frac{d}{dJ} Z(J)\bigg|_{J=0} = a.$$

The cumulant generating function is defined as

$$W(J) = \ln Z(J),$$

so that the cumulants are

$$\langle x^{n} \rangle_{C} = \frac{d^{n}}{dJ^{n}} W(J)\bigg|_{J=0}.$$

In the Gaussian case

$$W(J) = \frac{J^{2}G}{2} + Ja + \ln Z(0),$$

yielding $\langle x \rangle_{C} =\langle x \rangle= a$ and $\langle x^{2} \rangle _{C}\equiv\operatorname{var}(x)=\langle x^{2} \rangle-\langle x\rangle^{2}= G$, and $\langle x^{n}\rangle_{C}=0$, $n>2$.
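These relations are easy to verify numerically. The sketch below (plain Python; the values $a=1$ and $G=0.5$ are assumptions for illustration) computes $Z(J)$ by quadrature and recovers the mean and variance from finite-difference derivatives at $J=0$:

```python
import math

# Illustrative Gaussian parameters (assumptions, not from the text): mean a = 1.0, variance G = 0.5
A_MEAN, G_VAR = 1.0, 0.5

def Z(J, lo=-20.0, hi=20.0, n=4000):
    """Generating function Z(J) = int exp(J x) P(x) dx by the midpoint rule."""
    h = (hi - lo) / n
    return h * sum(math.exp(J * x - (x - A_MEAN) ** 2 / (2 * G_VAR))
                   for x in (lo + (i + 0.5) * h for i in range(n)))

# First and second derivatives of Z at J = 0 by central differences
eps = 1e-3
mean = (Z(eps) - Z(-eps)) / (2 * eps * Z(0.0))                   # <x>  -> a
second = (Z(eps) - 2 * Z(0.0) + Z(-eps)) / (eps ** 2 * Z(0.0))   # <x^2>
var = second - mean ** 2                                          # second cumulant -> G
```

The recovered `mean` and `var` should match *a* and *G* to the accuracy of the finite differences.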

The generating function can be generalized to an *n*-dimensional vector $x=\{x_{1},x_{2},\ldots,x_{n}\}$ to become a generating functional that maps the *n*-dimensional vector $J=\{J_{1},J_{2},\dots,J_{n}\}$ to a real number with the form

$$Z[J] = \int \prod_{j} dx_{j}\, \exp \biggl( -\frac{1}{2}\sum_{j,k} x_{j} G^{-1}_{jk} x_{k} + \sum_{j} J_{j}x_{j} \biggr),$$

where $G^{-1}_{jk}\equiv (G^{-1})_{jk}$ and we use square brackets to denote a functional. This integral can be solved by transforming to orthonormal coordinates, which is always possible if $G^{-1}_{jk}$ is symmetric, as it is for a well-defined probability density. Hence, let $\omega_{\alpha}$ and $v^{\alpha}$ be the *α*th eigenvalue and orthonormal eigenvector of $G^{-1}$, respectively, i.e.

$$\sum_{k} G^{-1}_{jk}\, v^{\alpha}_{k} = \omega_{\alpha}\, v^{\alpha}_{j}$$

and

$$\sum_{j} v^{\alpha}_{j} v^{\beta}_{j} = \delta_{\alpha\beta}.$$

Now, expand *x* and *J* in terms of the eigenvectors with

$$x_{j} = \sum_{\alpha} c_{\alpha} v^{\alpha}_{j},\qquad J_{j} = \sum_{\alpha} d_{\alpha} v^{\alpha}_{j}.$$

Hence

$$-\frac{1}{2}\sum_{j,k} x_{j} G^{-1}_{jk} x_{k} + \sum_{j} J_{j}x_{j} = -\frac{1}{2}\sum_{\alpha}\omega_{\alpha}c_{\alpha}^{2} + \sum_{\alpha}d_{\alpha}c_{\alpha}.$$

Since the Jacobian is 1 for an orthonormal transformation, the transformed generating functional in terms of the transformed parameter *d* is

$$Z[d] = \prod_{\alpha}\int dc_{\alpha}\, e^{-\frac{1}{2}\omega_{\alpha}c_{\alpha}^{2} + d_{\alpha}c_{\alpha}} = \prod_{\alpha}\sqrt{\frac{2\pi}{\omega_{\alpha}}}\, e^{d_{\alpha}^{2}/(2\omega_{\alpha})}.$$

Transforming back yields

$$Z[J] = Z[0]\, \exp \biggl( \frac{1}{2}\sum_{j,k} J_{j} G_{jk} J_{k} \biggr),$$

where $Z[0]=(2\pi)^{n/2}\sqrt{\det{G}}$.

The moments are given by

$$\langle x_{j_{1}}x_{j_{2}}\cdots x_{j_{m}} \rangle = \frac{1}{Z[0]}\,\frac{\partial}{\partial J_{j_{1}}}\frac{\partial}{\partial J_{j_{2}}}\cdots\frac{\partial}{\partial J_{j_{m}}} Z[J]\bigg|_{J=0}.$$

However, since the exponent is quadratic in the components $J_{l}$, only even-powered moments are nonzero. From this we can deduce that

$$\langle x_{j_{1}}x_{j_{2}}\cdots x_{j_{2m}} \rangle = \sum_{\text{all pairings}} G_{j_{k_{1}}j_{k_{2}}} G_{j_{k_{3}}j_{k_{4}}}\cdots G_{j_{k_{2m-1}}j_{k_{2m}}},$$

which is known as Wick’s theorem. Any Gaussian moment about the mean can be obtained by taking the sum of all the possible ways of “contracting” two of the variables. For example,

$$\langle x_{j}x_{k}x_{l}x_{m} \rangle = G_{jk}G_{lm} + G_{jl}G_{km} + G_{jm}G_{kl}.$$
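Wick’s theorem can be checked directly in low dimension. The sketch below compares a fourth moment of a zero-mean bivariate Gaussian, computed by quadrature, against the sum over the three pairings; the $2\times2$ covariance values are assumptions for illustration:

```python
import math

# Illustrative 2x2 covariance matrix (values are assumptions for this sketch)
G = [[1.0, 0.6],
     [0.6, 2.0]]
DET = G[0][0] * G[1][1] - G[0][1] * G[1][0]

def pdf(x1, x2):
    """Zero-mean bivariate Gaussian density with covariance G."""
    q = (G[1][1] * x1 * x1 - 2 * G[0][1] * x1 * x2 + G[0][0] * x2 * x2) / DET
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(DET))

def moment4(i, j, k, l, lo=-12.0, hi=12.0, n=400):
    """<x_i x_j x_k x_l> by two-dimensional midpoint quadrature."""
    h = (hi - lo) / n
    pts = [lo + (m + 0.5) * h for m in range(n)]
    tot = 0.0
    for x1 in pts:
        for x2 in pts:
            x = (x1, x2)
            tot += x[i] * x[j] * x[k] * x[l] * pdf(x1, x2)
    return tot * h * h

def wick4(i, j, k, l):
    """Sum over the three pairings: G_ij G_kl + G_ik G_jl + G_il G_jk."""
    return G[i][j] * G[k][l] + G[i][k] * G[j][l] + G[i][l] * G[j][k]
```

For instance, `moment4(0, 0, 1, 1)` should agree with `wick4(0, 0, 1, 1)` to quadrature accuracy.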

In the continuum limit, a generating functional for a function $x(t)$ on the real domain $t\in[0,T]$ is obtained by taking a limit of the generating functional for the vector $x_{j}$. Let the interval $[0,T]$ be divided into *n* segments of length *h* so that $T=nh$ and $x(jh)=x_{j}$, $j\in\{0,1,\dots,n\}$. We then take the limit of $n\rightarrow\infty$ and $h\rightarrow0$ preserving $T= n h$. We similarly identify $J_{j}\rightarrow J(t)$ and $G_{jk}\rightarrow G(s,t)$ and obtain

$$Z[J] = \int \mathcal{D}x(t)\, \exp \biggl( -\frac{1}{2}\int x(s)\,G^{-1}(s,t)\,x(t)\,ds\,dt + \int J(t)x(t)\,dt \biggr) = Z[0]\, e^{\frac{1}{2}\int J(s)G(s,t)J(t)\,ds\,dt},$$

where the measure for integration

$$\mathcal{D}x(t) = \lim_{n\rightarrow\infty}\prod_{j=0}^{n} dx_{j}$$

is over functions. Although $Z[0]=\lim_{n\rightarrow\infty}(2\pi)^{n/2}(\det G)^{1/2}$ is formally infinite, the moments of the distribution are well defined. The integral is called a path integral or a functional integral. Note that $Z[J]$ refers to a functional that maps different “forms” of the function $J(t)$ over the time domain to a real number. Defining the functional derivative to obey all the rules of the ordinary derivative with

$$\frac{\delta J(s)}{\delta J(t)} = \delta(s-t),$$

the moments again obey

$$\bigl\langle x(t_{1})x(t_{2})\cdots x(t_{m}) \bigr\rangle = \frac{1}{Z[0]}\,\frac{\delta}{\delta J(t_{1})}\frac{\delta}{\delta J(t_{2})}\cdots\frac{\delta}{\delta J(t_{m})} Z[J]\bigg|_{J=0}.$$

For example,

$$\bigl\langle x(t_{1})x(t_{2}) \bigr\rangle = G(t_{1},t_{2}).$$

We can further generalize the generating functional to describe the probability distribution of a function $\varphi(\vec{x})$ of a real vector $\vec{x}\in R^{n}$, instead of a single variable *t* with

Historically, computing moments and averages of a probability density functional of a function of more than one variable is called field theory. In general, the probability density functional is usually written in exponential form

$$P[\varphi] \propto e^{-S[\varphi]},$$

where $S[\varphi]$ is called the action and the generating functional is often written as

$$Z[J] = \int \mathcal{D}\varphi\, e^{-S[\varphi] + \int J(\vec{x})\varphi(\vec{x})\,d^{n}x},$$

where

$$\mathcal{D}\varphi = \prod_{\vec{x}} d\varphi(\vec{x}).$$

For example, the action given by

$$S[\varphi] = \int \biggl[ \frac{1}{2}\bigl(\nabla\varphi\bigr)^{2} + \frac{m^{2}}{2}\varphi^{2} + \frac{\lambda}{4!}\varphi^{4} \biggr] d^{n}x$$

is called $\varphi^{4}$ (“*φ*-4”) theory.

The analogy between stochastic systems and quantum theory, where path integrals are commonly used, is seen by transforming the time coordinates in the path integrals via $t \rightarrow\sqrt{-1}t$. When the field *φ* is a function of a single variable *t*, then this would be analogous to single particle quantum mechanics where the quantum amplitude can be expressed in terms of a path integral over a configuration variable $\phi(t)$. When the field is a function of two or more variables $\varphi(\vec{r},t)$, then this is analogous to quantum field theory, where the quantum amplitude is expressed as a path integral over the quantum field $\varphi(\vec{r},t)$.

## Application to SDE

Building on the previous section, here we derive a generating functional for SDEs. Consider a Langevin equation,

$$\dot{x}(t) = f(x,t) + g(x,t)\,\eta(t) \tag{3}$$

on the domain $t\in[0,T]$ with initial condition $x(t_{0})=y$. The stochastic forcing term $\eta(t)$ obeys $\langle\eta(t) \rangle= 0$ and $\langle\eta(t) \eta(t') \rangle= \delta(t - t')$. Equation (3) is to be interpreted as the Ito stochastic differential equation

$$dx_{t} = f(x_{t},t)\,dt + g(x_{t},t)\,dW_{t},$$

where $W_{t}$ is a Wiener process (i.e. Gaussian white noise), and $x_{t}$ is the value of *x* at time *t*. We show how to generalize to other stochastic processes later. Functions *f* and *g* are assumed to obey all the properties required for an Ito SDE to be well posed [31]. In particular, the stochastic increment $dW_{t}$ does not depend on $f(x_{t},t)$ or $g(x_{t},t)$ (i.e. $x_{t}$ is adapted to the filtration generated by the noise). The choice between Ito and Stratonovich conventions amounts to a choice of the measure for the path integrals, which will be manifested in a condition on the linear response or “propagator” that we introduce below.

The goal is to derive a probability density functional (PDF) and moment generating functional for the stochastic variable $x(t)$. For the path integral formulation, it is more convenient to take $x(t_{0})=0$ in (3) and enforce the initial condition with a source term so that

where $\mathbf{1}_{t_{0}}(t)= 1$ when $t=t_{0}$ and is zero otherwise (i.e. an indicator function). The discretized form of (5) with the Ito interpretation for small time step *h* is given by

$$x_{j+1} = x_{j} + f_{j}h + g_{j}w_{j}\sqrt{h} + y\,\delta_{j,0},$$

where $j\in\{0,1,\dots,N\}$, $x_{j}=x(jh+t_{0})$, $T= Nh$, $f_{j}=f(x_{j},jh+t_{0})$, $g_{j}=g(x_{j},jh+t_{0})$, $\delta_{j,k}$ is the Kronecker delta, $x_{0}=0$, and $w_{j}$ is a normally distributed discrete random variable with $\langle w_{j}\rangle=0$ and $\langle w_{j}w_{k}\rangle=\delta_{j,k}$. We use *x* and *w* without indices to denote the vectors $x = (x_{1}, \dots, x_{N})$ and $w = (w_{0}, w_{1}, \dots, w_{N-1})$. Formally, the PDF for the vector *x* conditioned on *w* and *y* can be written as

i.e. the probability density function is given by the Dirac delta function constrained to the solution of the SDE.
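The discretized update above is just the Euler–Maruyama scheme, which can be sketched generically; the drift and noise functions passed in below are illustrative placeholders, not quantities from the text:

```python
import math
import random

def euler_maruyama(f, g, y, t0, T, h, rng):
    """Ito discretization x_{j+1} = x_j + f_j h + g_j sqrt(h) w_j with w_j ~ N(0,1).
    f and g are evaluated at the left endpoint of each step (the Ito choice).
    Returns the sample path as a list of (t, x) pairs."""
    n = int(round((T - t0) / h))
    t, x = t0, y
    path = [(t, x)]
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)    # <w_j> = 0, <w_j w_k> = delta_jk
        x = x + f(x, t) * h + g(x, t) * math.sqrt(h) * w
        t += h
        path.append((t, x))
    return path

# With g = 0 the scheme reduces to forward Euler; e.g. dx/dt = -x, x(0) = 1
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.0,
                      y=1.0, t0=0.0, T=1.0, h=1e-3, rng=random.Random(0))
```

In the deterministic example the final value should be close to $e^{-1}$.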

Inserting the Fourier representation of the Dirac delta function,

$$\delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\,dk,$$

gives

The PDF is now expressed in exponential form.

For zero-mean unit-variance Gaussian white noise, the PDF of $w_{i}$ is given by

$$P(w_{i}) = \frac{1}{\sqrt{2\pi}}\,e^{-w_{i}^{2}/2}.$$

Hence

can be integrated by completing the square as demonstrated in the previous section to obtain

In the limit $h\rightarrow0$, $N\rightarrow\infty$ such that $T=Nh$, we can formally use the notation

with a newly defined complex variable $i k_{j} \rightarrow\tilde{x}(t)$. Although we use a continuum notation for convenience, $x(t)$ need not be differentiable and we interpret the action in terms of the discrete definition. However, all derivatives will be well defined in the perturbative expansions. $P[x(t) | y,t_{0}]$ is a functional of the function $x(t)$ conditioned on two scalars *y* and $t_{0}$. The moment generating functional for $x(t)$ and $\tilde{x}(t)$ is then given by

$$Z[J,\tilde{J}] = \int \mathcal{D}x(t)\,\mathcal{D}\tilde{x}(t)\, \exp \biggl( -S[x,\tilde{x}] + \int \tilde{J}(t)x(t)\,dt + \int J(t)\tilde{x}(t)\,dt \biggr) \tag{7}$$

with action

$$S[x,\tilde{x}] = \int \biggl[ \tilde{x}(t)\bigl(\dot{x}(t) - f(x,t) - y\,\delta(t-t_{0})\bigr) - \frac{1}{2}\tilde{x}^{2}(t)\,g^{2}(x,t) \biggr] dt. \tag{8}$$

The probability density functional can be derived directly from the SDE (3) by taking the path integral over the infinite dimensional Dirac delta functional:

yielding the action (8).^{Footnote 1} Owing to the definition $i k_{j} \rightarrow\tilde{x}(t)$ the integrals over $\tilde{x}(t)$ are along the *imaginary* axis, which is why no explicit *i* appears in the action above.

If we perform the $\tilde{x}$ integration in (7) we obtain

$$P\bigl[x(t) \mid y,t_{0}\bigr] = \mathcal{J}\,\exp \biggl( -\int \frac{\bigl(\dot{x} - f(x,t) - y\,\delta(t-t_{0})\bigr)^{2}}{2g^{2}(x,t)}\,dt \biggr),$$

where the Jacobian factor $\mathcal{J}$ depends upon the Ito or Stratonovich convention. For Ito, the Jacobian is 1. This is the Onsager–Machlup formula, which is a useful form for many stochastic applications [2, 13, 28]. For example, there is an intimate connection between the action and the rate function of large deviation theory [13, 32]. However, as we show in the following sections, it is more convenient to not integrate over $\tilde{x}$ for diagrammatic perturbation theory.

In a similar manner, we can define the path integral for more general noise processes than the Wiener process. Let $\eta(t)$ instead be a process with cumulant generating functional $W[J(t)]$ so that the cumulants of $\eta(t)$ (which may depend upon $x(t)$) are given by functional derivatives with respect to $J(t)$. This process will have its own action $S_{\eta}[\eta(t)]$ and the path integral can be written as

Noting that

is the definition of the cumulant generating functional for $\eta(t)$, we find that the path integral can be written as

In the cases where the input $\eta(t)$ is delta-correlated in time, we obtain

where we have Taylor expanded the moments of the noise distribution $g_{n}(x)$. For example, for the Wiener process

$$W\bigl[\tilde{x}(t)\bigr] = \frac{D}{2}\int \tilde{x}^{2}(t)\,dt,$$

i.e. $v_{20} = D$ and all other $v_{nm} = 0$.

### Ornstein–Uhlenbeck Process

We first demonstrate the path integral approach to computing moments in SDEs for the Ornstein–Uhlenbeck process

$$\dot{x}(t) = -a x(t) + \sqrt{D}\,\eta(t)$$

with initial condition $x(0)=y$. The action is

$$S[x,\tilde{x}] = \int \biggl[ \tilde{x}(t)\bigl(\dot{x}(t) + a x(t) - y\,\delta(t-t_{0})\bigr) - \frac{D}{2}\tilde{x}^{2}(t) \biggr] dt.$$

Defining an inverse propagator

$$G^{-1}(t,t') = \delta(t-t')\biggl(\frac{d}{dt'} + a\biggr),$$

the action can be written as

$$S[x,\tilde{x}] = \int \tilde{x}(t)\,G^{-1}(t,t')\,x(t')\,dt\,dt' - \int \biggl[ \frac{D}{2}\tilde{x}^{2}(t) + y\,\tilde{x}(t)\,\delta(t-t_{0}) \biggr] dt,$$

and the generating functional is

$$Z[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, \exp \biggl( -S[x,\tilde{x}] + \int \tilde{J}(t)x(t)\,dt + \int J(t)\tilde{x}(t)\,dt \biggr).$$

This path integral can be evaluated directly as a Gaussian integral since the action is quadratic. However, the generating functional can also be evaluated by expanding the exponent around the “free” action given by $S_{\mathrm{F}}[x(t), \tilde{x}(t)]=\int\tilde{x}(t) G^{-1}(t - t') x(t')\,dt \,dt'$. We will demonstrate this method since it forms the basis for perturbation theory for non-quadratic actions. Expand the integrand of the generating functional as

where

The generating functional is now expressed as a sum of moments of the free action, which are calculated from the free generating functional

$$Z_{\mathrm{F}}[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, \exp \biggl( -\int \tilde{x}(t)\,G^{-1}(t,t')\,x(t')\,dt\,dt' + \int \tilde{J}(t)x(t)\,dt + \int J(t)\tilde{x}(t)\,dt \biggr).$$

Although this integral is similar to (2), there are sufficient differences to warrant an explicit computation. We note again that $\tilde{x}$ is an imaginary variable so this integral corresponds to computing a functional complex Gaussian in two fields. As in Sect. 2, we discretize the time domain with $t\rightarrow t_{k}$, $(x(t_{k}), \tilde{x}(t_{k})) \rightarrow (x_{k}, iy_{k})$, $(J(t_{k}), \tilde{J}(t_{k}))\rightarrow(J_{k}, K_{k}) $,

and $\int \,dt \rightarrow\sum$. The generating functional is then transformed to

In the continuum limit, $dt\rightarrow0$ and $\vert \det G_{kl}^{-1}\vert \rightarrow1$, giving

$$Z_{\mathrm{F}}[J,\tilde{J}] = \exp \biggl( \int \tilde{J}(t)\,G(t,t')\,J(t')\,dt\,dt' \biggr),$$

where $G(t,t')$ is the operator inverse of $G^{-1}(t,t')$, i.e.

$$\biggl(\frac{d}{dt} + a\biggr) G(t,t') = \delta(t-t').$$

This Green’s function equation can be solved to obtain

$$G(t,t') = H(t-t')\,e^{-a(t-t')},$$

where $H(t)$ is the left continuous Heaviside step function (i.e. $H(0)=0$, $\lim_{t\rightarrow0^{+}} H(t)=1$ and thus $\lim_{t\rightarrow s^{+}} G(t,s)=1$, $G(s,s)=0$). The choice of $H(0)=0$ is consistent with the Ito condition for the SDE and ensures that the configuration variable $x(t)$ is uncorrelated with future values of the stochastic driving term. Other choices for $H(0)$ represent other forms of stochastic calculus (e.g. $H(0) = 1/2$ is the choice consistent with Stratonovich calculus).^{Footnote 2} Although the generating functional (11) differs from those introduced in Sect. 2 because it is complex and has two source functions *J* and $\tilde{J}$, it still obeys Wick’s theorem.
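The Ito convention $H(0)=0$ has a direct numerical counterpart: in the discretized scheme the state is uncorrelated with the noise increment generated in the *same* step, while it responds to that increment one step later with weight $g\sqrt{h}$. A Monte Carlo sketch (all parameters are illustrative assumptions):

```python
import random

# Illustrative OU-type parameters: dx = -a x dt + g dW, Euler-Maruyama steps of size h
a, g, h, nsteps, nsamples = 1.0, 1.0, 0.01, 100, 20000
rng = random.Random(1)

corr_same, corr_next = 0.0, 0.0
for _ in range(nsamples):
    x = 1.0
    for j in range(nsteps):
        w = rng.gauss(0.0, 1.0)
        x_new = x - a * x * h + g * (h ** 0.5) * w
        if j == nsteps - 1:
            corr_same += x * w        # <x_j w_j>     -> 0 (Ito: x_j is adapted, H(0) = 0)
            corr_next += x_new * w    # <x_{j+1} w_j> -> g*sqrt(h) = 0.1
        x = x_new
corr_same /= nsamples
corr_next /= nsamples
```

`corr_same` should vanish to sampling accuracy while `corr_next` approaches $g\sqrt{h}$; a Stratonovich-style midpoint rule would instead split the response between the two.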

The free moments are given by

$$\Biggl\langle \prod_{j} x(t_{j}) \prod_{k}\tilde{x}(t_{k}') \Biggr\rangle_{\mathrm{F}} = \prod_{j}\frac{\delta}{\delta\tilde{J}(t_{j})}\prod_{k}\frac{\delta}{\delta J(t_{k}')}\, Z_{\mathrm{F}}[J,\tilde{J}] \bigg|_{J=\tilde{J}=0},$$

since $Z_{\mathrm{F}}[0,0]=1$. We use a subscript *F* to denote expectation values with respect to the free action. From the action of (11), it is clear that the nonzero free moments must have equal numbers of $x(t)$ and $\tilde{x}(t)$ due to Wick’s theorem, which applies here for contractions between $x(t)$ and $\tilde{x}(t)$. For example, one of the fourth moments is given by

$$\bigl\langle x(t_{1})\tilde{x}(t_{1}')\,x(t_{2})\tilde{x}(t_{2}') \bigr\rangle_{\mathrm{F}} = G(t_{1},t_{1}')\,G(t_{2},t_{2}') + G(t_{1},t_{2}')\,G(t_{2},t_{1}').$$
Now the generating functional for the OU process (9) can be evaluated. The only surviving terms in the expansion will have equal numbers of $x(t)$ and $\tilde{x}(t)$. Thus only terms with factors of $\int\tilde{x}(t_{0})\tilde{J}(t_{1}) x(t_{1})\,dt_{1}$, $(D/2)\int\tilde {x}^{2}(t_{1}) \tilde{J}^{2}(t_{2})x^{2}(t_{2})\,dt_{1}\,dt_{2}$, and $\int\tilde {J}(t_{1})x(t_{1}) J(t_{2})\tilde{x}(t_{2})\,dt_{1}\,dt_{2}$ (and combinations of the three) will survive. For the OU process, the entire series is summable. First consider the case where $D=0$. Because there must be equal numbers of $\tilde{x}(t)$ and $x(t)$ factors in any nonzero moment due to Wick’s theorem, in this case the generating functional has the form

From Wick’s theorem, the free expectation value in (12) will be a sum over all possible contractions between $x(t)$ and $\tilde {x}(t)$ leading to *m*! combinations. Thus (12) is

which means the series is an exponential function. The other terms in the exponent when $D \ne0$ can be similarly calculated resulting in

$$Z[J,\tilde{J}] = \exp \biggl( y\int \tilde{J}(t)\,G(t,t_{0})\,dt + \int \tilde{J}(t)\,G(t,t')\,J(t')\,dt\,dt' + \frac{D}{2}\int \tilde{J}(t_{1})\,G(t_{1},t')\,G(t_{2},t')\,\tilde{J}(t_{2})\,dt'\,dt_{1}\,dt_{2} \biggr).$$

The cumulant generating functional is

$$W[J,\tilde{J}] = \ln Z[J,\tilde{J}] = y\int \tilde{J}(t)\,G(t,t_{0})\,dt + \int \tilde{J}(t)\,G(t,t')\,J(t')\,dt\,dt' + \frac{D}{2}\int \tilde{J}(t_{1})\,G(t_{1},t')\,G(t_{2},t')\,\tilde{J}(t_{2})\,dt'\,dt_{1}\,dt_{2}.$$

The only nonzero cumulants are the mean

$$\bigl\langle x(t) \bigr\rangle = \frac{\delta W}{\delta \tilde{J}(t)}\bigg|_{J=\tilde{J}=0} = y\,G(t,t_{0}),$$

the response function

$$\bigl\langle x(t)\tilde{x}(t') \bigr\rangle = \frac{\delta^{2} W}{\delta \tilde{J}(t)\,\delta J(t')}\bigg|_{J=\tilde{J}=0} = G(t,t'),$$

and the covariance

$$\bigl\langle x(t_{1})x(t_{2}) \bigr\rangle_{C} = \frac{\delta^{2} W}{\delta \tilde{J}(t_{1})\,\delta \tilde{J}(t_{2})}\bigg|_{J=\tilde{J}=0} = D\int G(t_{1},t')\,G(t_{2},t')\,dt'.$$

Closed-form expressions for the cumulants are obtained by using the solution for the propagator *G*. Hence, the mean is

$$\bigl\langle x(t) \bigr\rangle = y\,H(t-t_{0})\,e^{-a(t-t_{0})},$$

the response function is

$$\bigl\langle x(t)\tilde{x}(t') \bigr\rangle = H(t-t')\,e^{-a(t-t')},$$

and the covariance is

$$\bigl\langle x(t_{1})x(t_{2}) \bigr\rangle_{C} = D\int_{t_{0}}^{\min(t_{1},t_{2})} e^{-a(t_{1}-t')}\,e^{-a(t_{2}-t')}\,dt'.$$

For $t_{2}\ge t_{1}\ge t_{0}$

$$\bigl\langle x(t_{1})x(t_{2}) \bigr\rangle_{C} = \frac{D}{2a}\,e^{-a(t_{2}-t_{1})}\bigl(1 - e^{-2a(t_{1}-t_{0})}\bigr).$$

For $t_{1}=t_{2}=t$

$$\operatorname{var}\bigl(x(t)\bigr) = \frac{D}{2a}\bigl(1 - e^{-2a(t-t_{0})}\bigr).$$

The generating functional for the OU process could be computed exactly because the SDE could be solved exactly. The advantage of the path integral formulation is that perturbation theory can be applied systematically in the cases where the path integral cannot be completed exactly.
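As a numerical sanity check, the OU cumulants can be compared against a direct Euler–Maruyama simulation: the mean should approach $y e^{-a(t-t_{0})}$ and the variance $\frac{D}{2a}(1-e^{-2a(t-t_{0})})$. The parameter values below are assumptions for illustration:

```python
import math
import random

# Illustrative OU parameters (assumptions): dx = -a x dt + sqrt(D) dW, x(t0) = y
a, D, y, t0, T, h = 1.0, 1.0, 1.0, 0.0, 1.0, 0.01
nsamples = 20000
nsteps = int(round((T - t0) / h))
rng = random.Random(2)

finals = []
for _ in range(nsamples):
    x = y
    for _ in range(nsteps):
        x += -a * x * h + math.sqrt(D * h) * rng.gauss(0.0, 1.0)
    finals.append(x)

mc_mean = sum(finals) / nsamples
mc_var = sum((v - mc_mean) ** 2 for v in finals) / nsamples

exact_mean = y * math.exp(-a * (T - t0))                      # y G(T, t0)
exact_var = D / (2 * a) * (1 - math.exp(-2 * a * (T - t0)))   # (D/2a)(1 - e^{-2a(T-t0)})
```

The Monte Carlo estimates agree with the closed forms up to sampling noise and the $O(h)$ discretization bias.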

## Perturbative Methods and Feynman Diagrams

If the SDE is nonlinear then the generating functional cannot be computed exactly as in the linear case. However, propagators and moments can be computed perturbatively. The method we use is an infinite dimensional generalization of Laplace’s method for finite dimensional integrals [33]. In fact, the method was used to compute the generating functional for the Ornstein–Uhlenbeck process. The only difference is that for nonlinear SDEs the resulting asymptotic series is not generally summable.

The strategy is again to split the action $S[x,\tilde{x}]= S_{\mathrm{F}} + S_{\mathrm{I}}$, where $S_{\mathrm{F}}$ is called the “free” action and $S_{\mathrm{I}}$ is called the “interacting” action. The generating functional is

$$Z[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, e^{-S_{\mathrm{F}}[x,\tilde{x}] - S_{\mathrm{I}}[x,\tilde{x}] + \int \tilde{J}x\,dt + \int J\tilde{x}\,dt}. \tag{17}$$

The moments satisfy

$$\Biggl\langle \prod_{j} x(t_{j}) \prod_{k}\tilde{x}(t_{k}') \Biggr\rangle = \frac{1}{Z[0,0]}\prod_{j}\frac{\delta}{\delta\tilde{J}(t_{j})}\prod_{k}\frac{\delta}{\delta J(t_{k}')}\, Z[J,\tilde{J}] \bigg|_{J=\tilde{J}=0} \tag{18}$$

and the cumulants satisfy

$$\Biggl\langle \prod_{j} x(t_{j}) \prod_{k}\tilde{x}(t_{k}') \Biggr\rangle_{C} = \prod_{j}\frac{\delta}{\delta\tilde{J}(t_{j})}\prod_{k}\frac{\delta}{\delta J(t_{k}')}\, W[J,\tilde{J}] \bigg|_{J=\tilde{J}=0},\qquad W[J,\tilde{J}] = \ln Z[J,\tilde{J}]. \tag{19}$$

The generating functional is computed perturbatively by expanding the integrand of (17) around the free action

$$Z[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, \sum_{n=0}^{\infty}\frac{1}{n!}\bigl(-S_{\mathrm{I}}[x,\tilde{x}]\bigr)^{n}\, e^{-S_{\mathrm{F}}[x,\tilde{x}] + \int \tilde{J}x\,dt + \int J\tilde{x}\,dt}.$$

Hence, the generating functional can be expressed in terms of a series of free moments.

There are two types of expansions depending on whether the nonlinearity is small or the noise source is small. The small nonlinearity expansion is called a weak coupling expansion and the small noise expansion is called a semiclassical, WKB, or loop expansion.

### Weak Coupling Expansion

Consider the example nonlinear SDE

$$\dot{x}(t) = -a x(t) + b x^{2}(t) + \sqrt{D}\,x^{p/2}(t)\,\eta(t),$$

where $a >0$, $p\ge0$, and *b* can be of any sign. For example, $p=0$ corresponds to standard additive noise (as in the OU process), while $p=1$ gives multiplicative noise with variance proportional to *x*. The action for this equation is

$$S[x,\tilde{x}] = \int dt\, \biggl[ \tilde{x}\bigl(\dot{x} + a x - b x^{2} - y\,\delta(t-t_{0})\bigr) - \frac{D}{2}\tilde{x}^{2}x^{p} \biggr], \tag{20}$$

where we have defined the free action as $S_{\mathrm{F}}[x,\tilde{x}] = \int \,dt \tilde{x}(\dot{x}+ax)$. We first perform the weak coupling expansion explicitly and then show how the computation can be simplified using diagrammatic methods.

The generating functional for this action is

$$Z[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, \exp \biggl( -S[x,\tilde{x}] + \int \tilde{J}x\,dt + \int J\tilde{x}\,dt \biggr).$$

The Taylor expansion of the exponential around the free action gives

Because the free action $S_{\mathrm{F}}$ is bilinear in $\tilde{x}$ and *x*, the only surviving terms in the expansion are those with equal numbers of *x* and $\tilde{x}$ factors. Also, because of the Ito condition, $\langle x(t)\tilde{x}(t)\rangle=0$, these pairings must come from *different* terms in the expansion, e.g. the only term surviving from the first line is the very first term, regardless of the value of *p*. All other terms come from the quadratic and higher terms in the expansion. For simplicity, in the remainder of this example we limit ourselves to $p=0$. Hence, the expansion includes terms of the form

Note that this is not an exhaustive list of terms up to fifth order. Many of these terms will vanish because of the Ito condition. The combinatorial factors arise from the multiple ways of combining terms in the expansion. There are *n*! ways of combining terms at order *n* and terms with *m* repeats are divided by a factor of *m*!.

Completing the Gaussian integrals using Wick’s theorem then yields

where the propagator is given by the free action and obeys

$$\biggl(\frac{d}{dt} + a\biggr) G(t,t') = \delta(t-t'),$$

which is solved by

$$G(t,t') = H(t-t')\,e^{-a(t-t')} \tag{22}$$

with $H(0)=0$ as before. We also have $Z_{\mathrm{F}}[0,0]=1$.

The moments and cumulants are obtained from (18) and (19) respectively. For example, the mean is given by

The covariance is

The first cumulant is the same as the mean but the second cumulant or covariance is

### Diagrammatic Expansion

As can be seen in this example, the terms in the perturbation series become rapidly unwieldy. However, a convenient means to keep track of the terms is to use Feynman diagrams, which are graphs with edges connected by vertices that represent each term in the expansion of a moment. The edges and vertices represent terms (i.e. interactions) in the action and hence the SDE, which are combined according to a set of rules that reproduces the perturbation expansion shown above. These are directed graphs (unlike the Feynman diagrams usually used for equilibrium statistical mechanics or particle physics). The flow of each graph, which represents the flow of time, is directed from right to left, points to the left being considered to be at times after points to the right. The vertices represent points in time and separate into two groups: *endpoint* vertices and *interior* vertices. The moment $\langle\prod_{j=1}^{N} x(t_{j}) \prod_{k=1}^{M} \tilde{x}(t_{k}) \rangle$ is represented by diagrams with *N* *final* endpoint vertices which represent the times $t_{j}$ and *M* *initial* endpoint vertices which represent the times $t_{k}$. Interior vertices are determined from terms in the action.

Consider the interacting action expressed as the power series

$$S_{\mathrm{I}}[x,\tilde{x}] = \sum_{n,m} \frac{v_{nm}}{n!} \int_{t_{0}}^{\infty} \tilde{x}^{n}(t)\,x^{m}(t)\,dt, \tag{25}$$

where *n* and *m* cannot both be ≤1 (those terms are part of the free action). (Nonpolynomial functions in the action are expanded in a Taylor series to obtain this form.) There is a vertex type associated with each $V_{nm}$. The moment $\langle\prod_{j=1}^{N} x(t_{j}) \prod_{k=1}^{M} \tilde{x}(t_{k}) \rangle $ is given by a perturbative expansion of free action moments that are proportional to $\langle\prod_{j=1}^{N} x(t_{j}) \prod_{k=1}^{M} \tilde {x}(t_{k}) V(N_{v}) \rangle_{\mathrm{F}}$ where $V(N_{v})$ represents a product of $N_{v}$ vertices. Each term in this expansion corresponds to a graph with $N_{v}$ interior vertices. We label the *k*th vertex with time $t_{k}$. As indicated in equation (25), there is an integration over each such interior time point, over the interval $(t_{0}, \infty)$. The interaction $V_{nm}$ produces vertices with *n* edges to the left of the vertex (towards increasing time) and *m* edges to the right of the vertex (towards decreasing times). Edges between vertices represent propagators that arise from an application of Wick’s theorem and thus every $\tilde {x}(t')$ must be joined by a factor of $x(t)$
*in the future*, i.e. $t>t'$, because $G(t,t') \propto H(t-t')$. Also, since $H(0) = 0$ by the Ito condition, each edge must connect two *different* vertices. All edges must be connected: a vertex for the interaction $V_{nm}$ must connect to *n* edges on the left and *m* edges on the right.

Hence, terms at the $N_{v}$th order of the expansion for the moment $\langle\prod_{j=1}^{N} x(t_{j}) \prod_{k=1}^{M} \tilde{x}(t_{k}) \rangle$ are given by directed Feynman graphs with *N* final endpoint vertices, *M* initial endpoint vertices, and $N_{v}$ interior vertices with edges joining all vertices in all possible ways. The sum of the terms associated with these graphs is the value of the moment to $N_{v}$th order. Figure 1 shows the vertices applicable to action (20) with $p=0$. Arrows indicate the flow of time, from right to left. These components are combined into diagrams for the respective moments. Figure 2 shows three diagrams in the sum for the mean and second moment of $x(t)$. The entire expansion for any given moment can be expressed by constructing the Feynman diagrams for each term. Each Feynman diagram represents an integral involving the coefficients of a vertex and propagators. The construction of these integrals from the diagram is encapsulated in the Feynman rules:

(A) For each vertex interaction $V_{nm}$ in the diagram, include a factor of $-\frac{v_{nm}}{n!}$. The minus sign enters because the action appears in the path integral with a minus sign.

(B) If the vertex type, $V_{nm}$ appears *k* times in the diagram, include a factor of $\frac{1}{k!}$.

(C) For each edge between times *t* and $t'$, there is a factor of $G(t,t')$.

(D) For *n* distinct ways of connecting edges to vertices that yield the same diagram, i.e. the same topology, there is an overall factor of *n*. This is the combinatoric factor from the number of different Wick contractions that yield the same diagram.

(E) Integrate over the times *t* of each interior vertex over the domain $(t_{0}, \infty)$.

The diagrammatic expansion is particularly useful if the series can be truncated so that only a few diagrams need to be computed. The weak coupling expansion is straightforward. Suppose one or more of the vertices is associated with a small parameter *α*. Each appearance of that particular vertex diagram contributes a factor of *α* and the expansion can be continued to any order in *α*. The expansion is also generally valid over all time if $G(t,t')$ decays exponentially for large $t-t'$ but can break down if we are near a critical point and $G(t,t')$ obeys a power law. We consider the semiclassical expansion in the next section where the small parameter is the noise strength.

Comparing these rules with the diagrams in Fig. 2, one can see the terms in the expansions in equations (23) and (24), with the exception of the middle diagram in Fig. 2b. An examination of Fig. 2a shows that this middle diagram consists of two copies of the first diagram of the mean. Topologically, the diagrams have two forms. There are connected graphs and disconnected graphs. The disconnected graphs represent terms that can be completely factored into a product of moments of lower order (cf. the middle diagram in Fig. 2b). Cumulants consist only of connected graphs since the products of lower ordered moments are subtracted by definition. The connected diagrams in Fig. 2 lead to the expressions (23) and (24). In the expansion (21), the terms that do not include the source factors *J* and $\tilde{J}$ only contribute to the normalization $Z[0,0]$ and do not affect moments because of (18). In quantum field theory, these terms are called vacuum graphs and consist of closed graphs, i.e. they have no initial or trailing edges. In the cases we consider, all of these terms are 0 if we set $Z[0,0] = 1$.

### Semiclassical Expansion

Recall that the action for the general SDE (3) is

$$S[x,\tilde{x}] = \int \biggl[ \tilde{x}\bigl(\dot{x} - f(x,t) - y\,\delta(t-t_{0})\bigr) - D\,\frac{\tilde{x}^{2}}{2}\,g^{2}(x,t) \biggr] dt,$$

where we make explicit a small noise parameter *D*, while *f* and *g* are of order one. Now, rescale the action with the transformation $\tilde{x} \rightarrow\tilde{x}/D$ to obtain

$$S[x,\tilde{x}] = \frac{1}{D}\int \biggl[ \tilde{x}\bigl(\dot{x} - f(x,t) - y\,\delta(t-t_{0})\bigr) - \frac{\tilde{x}^{2}}{2}\,g^{2}(x,t) \biggr] dt.$$

The generating functional then has the form

$$Z[J,\tilde{J}] = \int \mathcal{D}x\,\mathcal{D}\tilde{x}\, \exp \biggl( -\frac{1}{D}\int \biggl[ \tilde{x}\bigl(\dot{x} - f - y\,\delta(t-t_{0})\bigr) - \frac{\tilde{x}^{2}}{2}g^{2} \biggr] dt + \int \tilde{J}x\,dt + \int J\tilde{x}\,dt \biggr).$$

In the limit as $D\rightarrow0$, the integral will be dominated by the critical points of the exponent of the integrand. In quantum theory, these critical points correspond to the “classical” equations of motion (mean field theory in statistical mechanics). Hence, an asymptotic expansion in small *D* corresponds to a semiclassical approximation. In both quantum mechanics and stochastic analysis this is also known as a WKB expansion. According to the Feynman rules for such an action, each diagram gains a factor of *D* for each edge (internal or external) and a factor of $1/D$ for each vertex. Let *E* be the number of external edges, *I* the number of internal edges, and *V* the number of vertices. Then each connected graph now has a factor $D^{I+E-V}$. It can be shown via induction that the number of closed loops *L* in a given connected graph must satisfy $L = I - V + 1$ [26]. To see this, note that for diagrams without loops any two vertices must be connected by at most one internal edge since more than one edge would form a closed loop. Since the diagrams are connected we must have $V = I + 1$ when $L = 0$. Adding an internal edge between any two vertices increases the number of loops by precisely one. Thus we see that the total factor for each diagram may be written $D^{E + L - 1}$. Since the number of external edges is fixed for a given cumulant, the order of the expansion scales with the number of loops.
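The identity $L = I - V + 1$ is the cyclomatic number of a connected graph, and it can be cross-checked by counting the edges left out of a spanning tree (each such edge closes exactly one independent loop). A small sketch with a hypothetical diagram topology:

```python
def loops_by_formula(n_vertices, edges):
    """L = I - V + 1 for a connected graph with I internal edges and V vertices."""
    return len(edges) - n_vertices + 1

def loops_by_spanning_tree(n_vertices, edges):
    """Count independent loops as the edges not used by a DFS spanning tree."""
    adj = {v: [] for v in range(n_vertices)}
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    seen, tree_edges, stack = {0}, set(), [0]
    while stack:
        u = stack.pop()
        for v, idx in adj[u]:
            if v not in seen:
                seen.add(v)
                tree_edges.add(idx)   # edge joins a new vertex: part of the tree
                stack.append(v)
    return len(edges) - len(tree_edges)  # leftover edges each close one loop

# Hypothetical connected diagram: a triangle plus a doubled edge
# (two propagators between vertices 2 and 3, as in a one-loop bubble).
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (2, 3)]
```

Both counts give 2 for this example, one loop from the triangle and one from the doubled edge.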

We can organize the diagrammatic expansion in terms of the number of loops in the graphs. Not surprisingly, the semiclassical expansion is also called the loop expansion. For example, as seen in Fig. 2a the graph for the mean has one external edge and thus to lowest order (graph with no loop), there are no factors of *D*, while one loop corresponds to the order *D* term. The second cumulant or variance has two external edges and thus the lowest order tree level term is order *D* as seen in Fig. 2b. Loop diagrams arise because of nonlinearities in the SDE that couple to moments of the driving noise source. The middle graph in Fig. 2a describes the coupling of the variance to the mean through the nonlinear $x^{2}$ term. This produces a single-loop diagram which is of order *D*, compared to the order 1 “tree” level mean graph; it is thus of the same order as the tree level diagram for the variance. This same construction holds for higher nonlinearities and higher moments for general theories. The loop expansion is thus a series organized around the magnitude of the coupling of higher moments to lower moments.

The loop expansion implies that for each order of *D* in the expansion, all diagrams with the same number of loops must be included. In some cases, this could be an infinite number of diagrams. However, one can still write down an expression for the expansion because it is possible to write down the sum of all of these graphs as a set of self-consistent equations. For example, consider the expansion of the mean for action (20) for the case where $D=0$ (i.e. no noise term). The expansion will consist of the sum of all tree level diagrams. From Eq. (23), we see that it begins with

$$\bigl\langle x(t) \bigr\rangle = y\,G(t,t_{0}) + b y^{2}\int G(t,t_{1})\,G^{2}(t_{1},t_{0})\,dt_{1} + \cdots.$$

In fact, this expansion will be the perturbative expansion for the solution of the ordinary differential equation obtained by discarding the stochastic driving term. Hence, the sum of all tree level diagrams for the mean must satisfy

$$\frac{d}{dt}\bigl\langle x(t) \bigr\rangle_{\mathrm{tree}} = -a\bigl\langle x(t) \bigr\rangle_{\mathrm{tree}} + b\bigl\langle x(t) \bigr\rangle_{\mathrm{tree}}^{2} + y\,\delta(t-t_{0}). \tag{27}$$

Similarly, the sum of the tree level diagrams for the linear response, $\langle x(t) \tilde{x}(t') \rangle_{\mathrm{tree}} = G_{\mathrm{tree}}(t,t')$, is the solution of the linearization of (27) around the mean solution with a unit initial condition at $t=t'$, i.e. the propagator equation

$$\biggl[ \frac{d}{dt} + a - 2b\bigl\langle x(t) \bigr\rangle_{\mathrm{tree}} \biggr] G_{\mathrm{tree}}(t,t') = \delta(t-t').$$

The semiclassical approximation amounts to a small noise perturbation around the solution to this equation. We can represent the sum of the tree level diagrams graphically by using bold edges, which we call “classical” edges, as in Fig. 3. We can then use the classical edges within the loop expansion to compute semiclassical approximations to the moments of the solution to the SDE.
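The tree-level resummation can be checked directly, assuming (as stated above) that the tree-level mean solves the noiseless ODE $\dot{x} = -ax + bx^{2}$. The sketch below, with illustrative parameter values, integrates that ODE with RK4, compares it with its closed-form Bernoulli solution, and confirms that the first perturbative correction accounts for the deviation from $ye^{-at}$ up to $O(b^{2})$:

```python
import math

a, b, y, T, h = 1.0, 0.1, 1.0, 1.0, 1e-4   # illustrative parameters, t0 = 0

def f(z):
    return -a * z + b * z * z    # noiseless (tree-level) vector field

x = y
for _ in range(int(round(T / h))):           # classic RK4 integration
    k1 = f(x); k2 = f(x + h * k1 / 2); k3 = f(x + h * k2 / 2); k4 = f(x + h * k3)
    x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Closed-form (Bernoulli) solution of dx/dt = -a x + b x^2, x(0) = y
closed = y * math.exp(-a * T) / (1 - (b * y / a) * (1 - math.exp(-a * T)))

# Free propagator term plus the first tree diagram:
# y e^{-aT} + (b y^2 / a) e^{-aT} (1 - e^{-aT})
pert = y * math.exp(-a * T) + (b * y * y / a) * math.exp(-a * T) * (1 - math.exp(-a * T))
```

The numerically integrated `x` matches `closed` to integrator accuracy, while `pert` differs from it only at order $b^{2}$.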

Consider the case of $p=0$ in action (20). The one-loop semiclassical approximation of the mean is given by the sum of the first two graphs in Fig. 2a with the thin edges replaced by bold edges. For the covariance, the first graph in Fig. 2b suffices, again with thin edges replaced by bold edges. These graphs are equivalent to the equations:

and

This approximation is first order in *D* for the mean (one loop) and covariance (tree level).

Now consider the one-loop corrections when $p=1$ in action (20). First consider the linear response, $\langle x(t) \tilde{x}(t') \rangle$. For simplicity, we will assume the initial condition $y=0$. In this case, the vertex in Fig. 1d now appears as in Fig. 4. The linear response $\langle x(t) \tilde{x}(t') \rangle$ will be given by the sum of all diagrams with one entering edge and one exiting edge. At tree level, there is only one such graph, equal to $G(t,t')$, given in (22). At one-loop order, we can combine the vertices in Figs. 1b and 1d into the second graph shown in Fig. 5 to obtain

This loop correction arises because of two types of vertices. Vertices that we call “branching” (as in Fig. 4) have more exiting edges than entering edges; those with more entering edges than exiting edges we call “aggregating.” Noise terms in the SDE produce branching vertices. As can be seen from the structure of the Feynman diagrams, all moments can be computed exactly when the deterministic part of the SDE is linear, because the computation only involves convolving the propagator (i.e. Green’s function) of the deterministic part of the SDE with the driving noise term, as in the case of the OU process above. Nonlinearities, on the other hand, give rise to aggregating vertices.

Combining the diagrams in Figs. 1 and 4, the analogs of Eqs. (29) and (30) for $p=1$ are

and

Using the definition of $G_{\mathrm{tree}}(t,t')$, the self-consistent semiclassical approximation for $\langle x(t) \rangle$ to one-loop order is

or

The semiclassical approximation that retains only the tree level computations of the mean and covariance is known as the “linear noise” approximation. The formal way of deriving these self-consistent equations is via the *effective action*, which is beyond the scope of this review; we refer the interested reader to [26].

An alternative way of deriving the loop expansion equations is to examine deviations away from the mean directly by transforming to a new variable $z=x-\bar{x}$ where $\bar{x}\stackrel{\mathrm{def}}{=} \langle x(t) \rangle_{\mathrm{tree}}$ satisfies the SDE with zero noise (27). The action then becomes

The propagator for this action is now given immediately by (28). At tree level, $\langle z \rangle_{\mathrm{tree}}=0$ by definition. At order *D*, the mean is given by the second diagram in Fig. 2a, which immediately gives (29) and (31). Likewise, the variance will be given by the diagram in Fig. 1d leading to (30) and (32).

## FitzHugh–Nagumo Model

These methods can also be applied to higher dimensions. As an example, consider the noise-driven FitzHugh–Nagumo neuron model:

where *v* is the neuron potential, *w* is a recovery variable, *a*, *b*, and *c* are positive parameters, *D* is the noise amplitude, and initial conditions are $v_{0}$, $w_{0}$. We will consider a semiclassical or WKB expansion in terms of *D*. We wish to compute the means, variances, and covariance for *v* and *w* as a loop expansion in *D*.
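For orientation (the display above did not specify the equations), a deterministic part consistent with the parameter values and fixed point quoted below is the standard FitzHugh–Nagumo form; the placement and normalization of the noise term on the voltage equation is our assumption:

```latex
\begin{aligned}
\dot{v} &= v - \frac{v^{3}}{3} - w + I + \sqrt{2D}\,\eta(t),\\
\dot{w} &= c\,(v + a - b w),
\end{aligned}
\qquad \langle \eta(t)\eta(t') \rangle = \delta(t-t').
```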

We first must formally construct a joint PDF for variables *v* and *w*. As both equations in the system must be satisfied simultaneously, we write

which leads to the action

We now transform to deviations around the mean with $\nu= v-V$ and $\omega= w -W$, with $V\stackrel{\mathrm{def}}{=}\langle v\rangle_{\mathrm{tree}}$ and $W\stackrel{\mathrm{def}}{=}\langle w \rangle_{\mathrm{tree}}$, where

The transformed action is

which we can rewrite as

where

The propagator

satisfies

or

The Feynman diagrams for the four propagators (39)–(42) and the two vertices in the action (38) are shown in Fig. 6.

The mean of *v* is $\langle v \rangle= V + \langle\nu\rangle$ and the mean of *w* is $\langle w \rangle= W+ \langle\omega\rangle$. The diagrams for $\langle\nu\rangle$ and $\langle\omega\rangle$ are shown in Fig. 7. $\langle\nu\rangle$ is given by joining the two vertex diagrams in Figs. 6b and 6c to obtain Fig. 7a, which is topologically equivalent to the middle diagram in Fig. 2a:

where the subscript indicates that this is an ensemble average of $\nu (t)$ in the domain $[t_{0},t]$. $\langle\omega\rangle$ is given by the same diagram as $\langle\nu\rangle$ except that propagator $G_{\omega}^{v}(t,t_{2})$ replaces $G_{\nu}^{v}(t,t_{2})$:

The diagrams for the variances and covariances (two-point cumulants) are shown in Fig. 8. The variance of *v*, $\langle\nu(t)\nu(t')\rangle_{C}$ in Fig. 8a, is found by using Fig. 6c adjoined to two $G_{\nu}^{v}$ propagators:

The variance of *w*, $\langle\omega(t)\omega(t')\rangle_{C}$ in Fig. 8b, is also given by Fig. 6c but adjoined to two $G_{\omega}^{v}$ propagators:

Finally, the covariance of *v* and *w*, $\langle\nu(t)\omega (t')\rangle_{C}$ in Fig. 8c, is given by Fig. 6c adjoined to the $G_{\nu}^{v}$ and $G_{\omega}^{v}$ propagators:

To evaluate these expressions we first solve the deterministic equations (37) to obtain $V(t)$ and $W(t)$. We then use $V(t)$ in (39)–(42) and solve for the four propagators, which go into the expressions for the moments. When the solutions of (37) are fixed points, we can find closed-form solutions for all the equations. Otherwise, we may need to solve some of the equations numerically. However, instead of having to average over many samples of the noise distribution, we only need to solve a small set of equations once to obtain the moments.

Here we give the example solution for the fixed point given by the solution of

The propagator equations are pairwise coupled and thus can easily be solved by Laplace transforms or any other means to obtain

where $r=(1/2) (V^{2}-1+b c )$ and $\varOmega=\sqrt{c [1+b (V^{2}-1 ) ]-r^{2}}$.

We can now insert these propagators into the variance and covariance equations:

These expressions not only capture the stationary values but also transient effects. The integrals can all be performed in closed form and result in long expressions of exponential and trigonometric functions, which we do not include here. Figure 9 shows the comparison between these perturbative estimates and numerical simulations for three different noise values, $D=0.001, 0.01, 0.015$. We used the parameters $a=0.7$, $b=0.8$, $c=0.1$, $I=0$, which give rise to fixed points $V=-1.1994$ and $W=-0.62426$ and propagator parameters $r=0.25928$ and $\varOmega=0.26050$. Numerical simulations are averages over one million samples. The estimates fit the data extremely well for $D=0.001$ but start to break down for $D=0.01$. The reason is that the perturbation expansion is only valid in the vicinity of the fixed point but as the noise strength increases the probability to escape from the fixed point and enter an excitable orbit becomes significant. Phase plane portraits in Fig. 10 show that for $D=0.001$ the orbit stays near the stable fixed point but for the larger noise values, there are large excursions away from the fixed point when the orbit crosses an “escape” threshold. The probability to escape can also be estimated using path integrals and WKB theory as detailed in [13, 25]. Software for performing simulations and integrals is available upon request.
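The quoted numbers can be checked directly. Assuming the standard FitzHugh–Nagumo nullclines $w = v - v^{3}/3 + I$ and $w = (v+a)/b$ (our assumption, since the model equations are not reproduced above), the fixed point, the decay rate $r = \frac{1}{2}(V^{2}-1+bc)$, and the damped-oscillation frequency $\varOmega$ of the linearized dynamics reproduce the values used for Fig. 9:

```python
import numpy as np

a, b, c, I = 0.7, 0.8, 0.1, 0.0

# Fixed point: intersect the cubic nullcline w = v - v**3/3 + I
# with the linear nullcline w = (v + a)/b (assumed standard FHN form).
# Eliminating w gives the cubic -v**3/3 + (1 - 1/b)*v + I - a/b = 0.
coeffs = [-1.0 / 3.0, 0.0, 1.0 - 1.0 / b, I - a / b]
V = min(np.roots(coeffs), key=lambda z: abs(z.imag)).real  # the real root
W = (V + a) / b

# Jacobian of the deterministic flow at the fixed point
J = np.array([[1.0 - V**2, -1.0],
              [c,          -b * c]])
r = -0.5 * np.trace(J)                      # equals (V**2 - 1 + b*c)/2
Omega = np.sqrt(np.linalg.det(J) - r**2)    # frequency of the damped spiral

print(V, W, r, Omega)
```

With these parameters the code recovers $V\approx-1.1994$, $W\approx-0.62426$, $r\approx0.25928$, and $\varOmega\approx0.26050$, matching the text.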

## Connection to Fokker–Planck Equation

In stochastic systems, one is often interested in the PDF $p(x,t)$, which gives the probability density of position *x* at time *t*. This is in contrast with the probability density functional $P[x(t)]$ which is the probability density of all possible functions or paths $x(t)$. Previous sections have been devoted to computing the moments of $P[x(t)]$, which provide the moments of $p(x,t)$ as well. In this section we leverage knowledge of the moments of $p(x,t)$ to determine an equation it must satisfy. In simple cases, this equation is a Fokker–Planck equation for $p(x,t)$.

The PDF $p(x,t)$ can be formally obtained from $P[x(t)]$ by marginalizing over the interior points of the function $x(t)$. Consider the transition probability $U(x_{1},t_{1} | x_{0}, t_{0})$ between two points $x_{0}$, $t_{0}$ and $x_{1}$, $t_{1}$. This is equal to $p(x,t)$ given the initial condition $p(x,t_{0}) = \delta(x - x_{0})$. In terms of path integrals this can be expressed as

where the upper limit in the integral is fixed at $x(t_{1})=x_{1}$ and the lower at $x(t_{0})=x_{0}$. The lower limit appears as the initial condition term in the action and can thus be considered part of $P[x(t)]$. The upper limit on the path integral can be imposed with a functional Dirac delta via

which in the Fourier representation is given by

where the contour for the *J* integral runs along the imaginary axis. This can be rewritten as

in terms of an initial condition centered moment generating function

where the measure $\mathcal{D}x(t)$ is defined such that $Z_{\mathrm{CM}}(0)=1$. Note that this generating function $Z_{\mathrm{CM}}(J)$ is different from the generating functionals we presented in previous sections. $Z_{\mathrm{CM}}(J)$ generates moments of the deviations of $x(t)$ from the initial value $x_{0}$ at a specific point in time *t*. Taylor expanding the exponential gives

where

Inserting into (61) gives

Using the identity

results in

The probability density function $p(x,t)$ obeys

Inserting (62) gives

Expanding $p(x,t+\Delta t)$ in a Taylor series in Δ*t* gives

In the limit $\Delta t\rightarrow0$ we obtain the Kramers–Moyal expansion

where the *jump moments* are defined by
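In standard notation, the Kramers–Moyal expansion and the jump moments read:

```latex
\frac{\partial p(x,t)}{\partial t}
  = \sum_{n=1}^{\infty} \frac{(-1)^{n}}{n!}
    \frac{\partial^{n}}{\partial x^{n}}
    \bigl[ D_{n}(x,t)\, p(x,t) \bigr],
\qquad
D_{n}(x,t) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}
  \bigl\langle \bigl( x(t+\Delta t) - x \bigr)^{n} \bigr\rangle
  \Big|_{x(t)=x}.
```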

As long as these limits are convergent, then it is relatively easy to see that only connected Feynman graphs will contribute to the jump moments. In addition, we can define $z=x-y$, where *y* is the initial condition, $\tilde{z} = \tilde{x}$ and use the action $S[z(t)+y, \tilde {z}(t)]$. This shift in *x* removes the initial condition term. This means we can calculate the *n*th jump moment by using this shifted action to compute the sum of all graphs with no initial edges and *n* final edges (as in Fig. 1d for $n=2$).

As an example, consider the Ito SDE (3). From the discretization (6), where $h=\Delta t$, it is found that

which yields $D_{1}(x,t)= f(x,t)$, $D_{2}=g(x,t)^{2}$, and $D_{n}=0$ for $n>2$. Thus for the Ito SDE (3), the Kramers–Moyal expansion becomes the Fokker–Planck equation,
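Explicitly, with $D_{1}=f$ and $D_{2}=g^{2}$ the expansion truncates to:

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x} \bigl[ f(x,t)\, p(x,t) \bigr]
    + \frac{1}{2} \frac{\partial^{2}}{\partial x^{2}}
      \bigl[ g(x,t)^{2}\, p(x,t) \bigr].
```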

We have $D_{n}=0$ for $n>2$ even though there are nonzero contributions from connected graphs to these moments for $n>2$ in general. However, all of these moments require the repeated use of the vertex with two exiting edges; this will cause $D_{n} \propto\Delta t^{m}$ for some $m > 1$ and thus the jump moment will be zero in the limit.
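The scaling argument can be made concrete. For an Euler–Maruyama step $\Delta x = f\,\Delta t + g\sqrt{\Delta t}\,\xi$ with $\xi\sim\mathcal{N}(0,1)$, the Gaussian moments give $\langle\Delta x\rangle = f\Delta t$, $\langle\Delta x^{2}\rangle = g^{2}\Delta t + f^{2}\Delta t^{2}$, and $\langle\Delta x^{3}\rangle = 3 f g^{2}\Delta t^{2} + f^{3}\Delta t^{3}$, so $D_{3}$ vanishes linearly in $\Delta t$. A short check (the values $f=-1$, $g=0.5$ are arbitrary illustrations):

```python
# Jump moments of one Euler-Maruyama step dx = f*dt + g*sqrt(dt)*xi,
# computed exactly from the Gaussian moments E[xi]=0, E[xi^2]=1, E[xi^3]=0.
def jump_moments(f, g, dt):
    m1 = f * dt                                   # E[dx]
    m2 = f**2 * dt**2 + g**2 * dt                 # E[dx^2]
    m3 = f**3 * dt**3 + 3.0 * f * g**2 * dt**2    # E[dx^3]
    return m1 / dt, m2 / dt, m3 / dt              # D1, D2, D3 at finite dt

f, g = -1.0, 0.5
for dt in (1e-2, 1e-4, 1e-6):
    D1, D2, D3 = jump_moments(f, g, dt)
    print(dt, D1, D2, D3)
# D1 -> f, D2 -> g**2, and D3 -> 0 as dt -> 0
```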

We can envision actions for more general stochastic processes by considering vertices which have more than two exiting edges, i.e. we can add a term to the action of the form

for some *n* and function $h(x)$. This will produce a nonzero $D_{n}$. The PDF for this kind of process will not in general be describable by a Fokker–Planck equation, but we will need the full Kramers–Moyal expansion. If we wished to provide an initial distribution for $x(t_{0})$ instead of specifying a single point, we could likewise add similar terms to the action. In fact, the completely general initial condition term is given by

where $Z_{y}$ is the generating functional for the initial distribution. In other words, the initial state terms in the action are the cumulants of the initial distribution multiplied by the corresponding powers of $\tilde{x}(t_{0})$.

Returning to the Ito process (3), the solution to the Fokker–Planck equation can be obtained directly from the path integral formula for the transition probability (61). Let $\ln Z[J]$ be the cumulant generating function for the moments of $x(t)$ at time *t*. It can be expanded as

yielding

For the Ornstein–Uhlenbeck process the first two cumulants are given in (15) and (16), yielding (assuming initial condition $x(t_{0}) = y$)
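The resulting transition density is Gaussian with these two cumulants. As a numerical sanity check, one can integrate the moment equations directly; here we assume the convention $dx = -a x\,dt + \sqrt{D}\,dW$ with $x(t_{0})=y$ (noise normalizations differ across texts by factors of 2), for which the mean and variance obey $\dot{m} = -a m$ and $\dot{S} = -2 a S + D$, with closed forms $m(t) = y e^{-a(t-t_{0})}$ and $S(t) = \frac{D}{2a}(1 - e^{-2a(t-t_{0})})$:

```python
import numpy as np

def ou_moments(a, D, y, T, n=200000):
    """Integrate dm/dt = -a*m and dS/dt = -2*a*S + D with forward Euler
    steps and return both the numerical and closed-form OU mean/variance."""
    h = T / n
    m, S = y, 0.0
    for _ in range(n):
        m += h * (-a * m)
        S += h * (-2.0 * a * S + D)
    m_exact = y * np.exp(-a * T)
    S_exact = D / (2.0 * a) * (1.0 - np.exp(-2.0 * a * T))
    return m, S, m_exact, S_exact

m, S, m_exact, S_exact = ou_moments(a=1.0, D=0.5, y=2.0, T=3.0)
# the Euler results converge to the closed-form cumulants as n grows
```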

## Further Reading

The methods we introduced can be generalized to higher dimensional systems, including networks of coupled oscillators or neurons [16, 17, 19, 21–24]. The reader interested in this approach is encouraged to explore the extensive literature on path integrals and field theory. Bressloff [13] covers the connection between the path integral approach and large deviation theory. Be aware that most of the references listed concentrate on applications and formulations appropriate for equilibrium statistical mechanics and particle physics, which means that they will not explicitly discuss the response function approach we have demonstrated here. For application-driven examinations of path integration there are Kleinert [11], Schulman [34], Kardar [27], and Tauber [12]. More mathematically rigorous treatments can be found in Simon [35] and Glimm and Jaffe [36]. For the reader seeking more familiarity with concepts of stochastic calculus such as Ito or Stratonovich integration, there are applied approaches [3] as well as rigorous treatments [37]. Zinn-Justin [26] covers a wide array of topics of interest in quantum field theory, from statistical mechanics to particle physics. Despite the exceptionally terse and dense presentation, the elementary material in this volume is recommended to those new to the concept of path integrals. Note that Zinn-Justin covers SDEs in a somewhat different manner from that presented here (the Onsager–Machlup integral is derived; although see Chaps. 16 and 17), as does Kleinert. We should also point out the parallel between the form of the action for exponential decay (i.e. $D = 0$ in the OU process) and the holomorphic representation of the harmonic oscillator presented in [26]. The response function formalism was introduced by Martin et al. [29]. Closely related path integral formalisms were introduced in the work of Doi [5, 6] and Peliti [7] and have been used in the analysis of reaction–diffusion systems [8–10, 30].
Uses of path integrals in neuroscience have appeared in [14, 15, 17–21, 23, 25].

## Notes

- 1.
This derivation is, strictly speaking, incorrect because the delta functional fixes the value of $\dot{x}(t)$, not $x(t)$. It works because the Jacobian under a change of variables from $\dot{x}(t)$ to $x(t)$ is 1.

- 2.
This is also a manifestation of the normal-ordering convention chosen for the theory. Zinn-Justin [26] refers to this as the “$\epsilon(0)$ problem.”

## References

- 1.
Tuckwell HC. Stochastic processes in the neurosciences. vol. 56. Philadelphia: SIAM; 1989.

- 2.
Risken H. The Fokker–Planck equation: methods of solution and applications. 2nd ed. vol. 18. New York: Springer; 1996.

- 3.
Gardiner CW. Handbook of stochastic methods: for physics, chemistry, and the natural sciences. 3rd ed. Springer series in synergetics. Berlin: Springer; 2004.

- 4.
Van Kampen NG. Stochastic processes in physics and chemistry. 3rd ed. Amsterdam: Elsevier; 2007.

- 5.
Doi M. J Phys A, Math Gen. 1976;9:1465.

- 6.
Doi M. J Phys A, Math Gen. 1976;9:1479.

- 7.
Peliti L. J Phys. 1985;46:1469.

- 8.
Janssen H-K, Tauber UC. Ann Phys. 2005;315:147.

- 9.
Cardy J. Renormalization group approach to reaction–diffusion problems. Review article. arXiv:cond-mat/9607163.

- 10.
Cardy J. Field theory and nonequilibrium statistical mechanics. Lecture notes, academic year 1998–99, summer semester.

- 11.
Kleinert H. Path integrals in quantum mechanics, statistics, polymer physics, and financial markets. Singapore: World Scientific; 2004.

- 12.
Tauber U. Critical dynamics: a field theory approach to equilibrium and non-equilibrium scaling behavior. Cambridge: Cambridge University Press; 2014.

- 13.
Bressloff PC. Stochastic processes in cell biology. New York: Springer; 2014.

- 14.
Buice MA. PhD thesis. University of Chicago. 2005.

- 15.
Buice MA, Cowan JD. Phys Rev E. 2007;75:051919.

- 16.
Hildebrand E, Buice M, Chow C. Phys Rev Lett. 2007;98:054101.

- 17.
Buice MA, Chow CC. Phys Rev E, Stat Nonlinear Soft Matter Phys. 2007;76:031118.

- 18.
Buice MA, Cowan JD. Prog Biophys Mol Biol. 2009;99:53.

- 19.
Buice MA, Cowan JD, Chow CC. Neural Comput. 2010;22:377.

- 20.
Bressloff PC. SIAM J Appl Math. 2009;70:1488.

- 21.
Buice MA, Chow CC. Phys Rev E. 2011;84:051120.

- 22.
Buice MA, Chow CC. J Stat Mech Theory Exp. 2013;2013:P03003.

- 23.
Buice MA, Chow CC. PLoS Comput Biol. 2013;9:e1002872.

- 24.
Buice MA, Chow CC. Front Comput Neurosci. 2013;7:162.

- 25.
Bressloff PC, Newby JM. Phys Rev E, Stat Nonlinear Soft Matter Phys. 2014;89:042701.

- 26.
Zinn-Justin J. Quantum field theory and critical phenomena. 4th ed. Oxford: Oxford Science Publications; 2002.

- 27.
Kardar M. Statistical physics of fields. Cambridge: Cambridge University Press; 2007.

- 28.
Chaichian M, Demichev AP. Path integrals in physics. Bristol: Institute of Physics; 2001.

- 29.
Martin PC, Siggia ED, Rose HA. Phys Rev A. 1973;8:423.

- 30.
Tauber UC, Howard M, Vollmayr-Lee BP. J Phys A, Math Gen. 2005;38:R79.

- 31.
Øksendal BK. Stochastic differential equations: an introduction with applications. 6th ed. Berlin: Springer; 2007.

- 32.
Bressloff P, Faugeras O. arXiv:1410.2152 (2014).

- 33.
Bender CM, Orszag SA. Advanced mathematical methods for scientists and engineers. New York: Springer; 1999.

- 34.
Schulman L. Techniques and applications of path integration. New York: Dover; 2005.

- 35.
Simon B. Functional integration and quantum physics. Providence: AMS; 2005.

- 36.
Glimm J, Jaffe A. Quantum physics: a functional integral point of view. New York: Springer; 1981.

- 37.
Karatzas I, Shreve S. Brownian motion and stochastic calculus. New York: Springer; 1991.

## Acknowledgements

This research was supported by the Intramural Research Program of the NIH, NIDDK.

## Additional information

### Competing Interests

The authors declare that they have no competing interests.

### Authors’ Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

## Rights and permissions

**Open Access** This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
