 Research
 Open Access
Rendering neuronal state equations compatible with the principle of stationary action
The Journal of Mathematical Neuroscience volume 11, Article number: 10 (2021)
Abstract
The principle of stationary action is a cornerstone of modern physics, providing a powerful framework for investigating dynamical systems found in classical mechanics through to quantum field theory. However, computational neuroscience, despite its heavy reliance on concepts in physics, is anomalous in this regard as its main equations of motion are not compatible with a Lagrangian formulation and hence with the principle of stationary action. Taking the Dynamic Causal Modelling (DCM) neuronal state equation as an instructive archetype of the first-order linear differential equations commonly found in computational neuroscience, we show that it is possible to make certain modifications to this equation to render it compatible with the principle of stationary action. Specifically, we show that a Lagrangian formulation of the DCM neuronal state equation is facilitated using a complex dependent variable, an oscillatory solution, and a Hermitian intrinsic connectivity matrix. We first demonstrate proof of principle by using Bayesian model inversion to show that both the original and modified models can be correctly identified via in silico data generated directly from their respective equations of motion. We then provide motivation for adopting the modified models in neuroscience by using three different types of publicly available in vivo neuroimaging datasets, together with open-source MATLAB code, to show that the modified (oscillatory) model provides a more parsimonious explanation for some of these empirical time-series. It is our hope that this work will, in combination with existing techniques, allow people to explore the symmetries and associated conservation laws within neural systems – and to exploit the computational expediency facilitated by direct variational techniques.
Introduction
Virtually all of modern physics has been formulated in terms of the principle of stationary action, from Maxwell’s equations in electromagnetism [1], to the Einstein field equations in the general theory of relativity [2], through to the Dirac equation in quantum mechanics [3]. This approach has many advantages. Firstly, coupled sets of equations of motion can be described in terms of a single Lagrangian function, allowing for a parsimonious unified mathematical framework – as was famously demonstrated via the single Lagrangian formulation of the entire standard model of particle physics [4]. Lagrangian formulations also allow for otherwise inaccessible insights into dynamical systems, given that they uncover otherwise hidden symmetries and associated conservation laws [5]. Furthermore, direct variational techniques present potentially important applications in the analysis of dynamical systems, given that they allow for equations of motion to be bypassed entirely [6, 7] – therefore greatly expediting the execution times of forward models.
In classical physics, dynamical systems are framed in terms of equations of motion describing quantities such as position, velocity, and acceleration. On the other hand, the equations of motion used in computational neuroscience refer to more abstract quantities, such as membrane potentials, firing rates, and macroscopic neuronal activity [8]. Variational techniques have been used in conjunction with neural networks, regarding the construction of path integral representations of stochastic dynamics. These techniques elucidate the systematic corrections to mean-field results due to stochasticity, and allow the calculation of moments of activity, as well as the application of renormalization group methods in critical states [9–12]. These formulations have also been shown to be applicable to disordered systems, for example neuronal networks with randomly drawn connectivity [13].
This paper comprises three sections.
In the first section, using the Dynamic Causal Modelling (DCM) neuronal state equation [14] as an archetype of the first-order linear state equations used in computational neuroscience, we show that it is possible to make certain modifications to its mathematical form that allow for a Lagrangian formulation. Specifically, three such modifications are found to be necessary: (a) the dependent variable must be complex; (b) the left-hand side of the state equation must be multiplied by the imaginary unit i – a modification that fundamentally alters the model by changing the solutions from non-oscillatory to oscillatory; and (c) the intrinsic coupling matrix must be Hermitian. We stress that the original (unmodified) state equation cannot be recovered from the Lagrangian formulation – only the modified complex, oscillatory form allows for compatibility with the principle of stationary action.
In the second section, we provide proof of principle by demonstrating that both the original (non-complex, non-oscillatory) and modified (complex, oscillatory) neuronal state equations can be correctly identified. To do this, we generate two sets of synthetic data – one using the original model and one using the modified equation of motion. For each of these two datasets, we then use Bayesian model inversion to evaluate the respective variational free energy (model evidence); using both the original, as well as the modified equations of motion. We find that the variational free energy correctly assigns the original data to the equation of motion that generated those data – thus demonstrating that this technique can disambiguate between the genesis of these data: i.e., that the implicit models are identifiable – and that the (complex, oscillatory) modification has a material effect on observed dynamics.
In the third section, we show that the modified equation of motion provides higher model evidence in publicly available datasets obtained using three different neuroimaging techniques – electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and electrocorticography (ECoG). These numerical analyses underwrite the generalisability of – and provide empirical motivation for – adopting the modified state equation in certain cases. We provide open-source MATLAB code that reproduces the results presented in this paper – both for the synthetic and experimental datasets in question.
Main text
The DCM neuronal state equation
A generic nonlinear dynamical system can be expressed in terms of a Taylor series expansion [15], which in its simplest form, for the ith region, is expressed as the linear DCM neuronal state equation:

\[ \dot{z}_{i} ( t ) = \sum_{j} A_{ij} z_{j} ( t ) + \sum_{j} C_{ij} v_{j} ( t ) + \omega _{i}^{ ( z )} ( t ) \quad (1) \]
where z represents the state of the system in question; A is the intrinsic coupling matrix; C is the extrinsic input matrix; \(v =u+ \omega ^{ ( v )}\), where \(\omega ^{ ( v )}\) is a noise term describing random, non-Markovian fluctuations on external perturbations u; and \(\omega ^{ ( z )}\) is a noise term describing random, non-Markovian fluctuations on z [16]. Using Eq. (1), we can obtain estimates of latent model parameters in the presence of noise on states via Bayesian model inversion. These model parameters include the ways in which the system in question is connected, both intrinsically and extrinsically, with the surrounding environment.
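As a concrete illustration of Eq. (1), the following sketch integrates a small three-region network forward in time. The A and C matrices, the Gaussian input, and the noise-free setting (\(\omega = 0\), so \(v = u\)) are illustrative choices, not values taken from the accompanying code:

```python
import numpy as np

# Illustrative 3-node network: stable intrinsic coupling (negative diagonal),
# with an external input entering the first node only.
A = np.array([[-0.25, 0.10, 0.00],
              [0.10, -0.25, 0.10],
              [0.00, 0.10, -0.25]])   # intrinsic coupling matrix
C = np.array([1.0, 0.0, 0.0])         # extrinsic input matrix (one input)

dt, T = 0.01, 10.0
n = int(T / dt)
t = np.arange(n) * dt
u = np.exp(-((t - 3.0) ** 2) / 0.5)   # Gaussian 'bump' input

z = np.zeros((n, 3))
for k in range(n - 1):
    dz = A @ z[k] + C * u[k]          # Eq. (1) with omega = 0, v = u
    z[k + 1] = z[k] + dt * dz         # forward Euler step
```

Because the eigenvalues of A all have negative real parts, the states relax back towards zero once the input subsides.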
A failed attempt using real variables
We will now show why complex variables are necessary when casting Eq. (1) – and by extension any first-order linear differential equation – in the form of a Lagrangian. We will prove this first by contradiction – i.e., we begin by attempting to use real variables and demonstrate that this leads to a non-unique solution. If we insist on using real variables then (as shown below) we must assume that v and ω are constant in time, in which case we can rewrite Eq. (1) as follows:

\[ \dot{z}_{i} ( t ) = \sum_{j} A_{ij} z_{j} ( t ) + d_{i} \quad (2) \]
where \(d_{i} = \sum_{j} C_{ij} v_{j} + \omega _{i}^{ ( z )}\) is a constant.
Differentiating Eq. (2) with respect to time we obtain \(\ddot{z}_{i} ( t ) = \sum_{j} A_{ij} \dot{z}_{j} ( t )\) which, together with Eq. (2), gives

\[ \ddot{z}_{i} ( t ) = \sum_{j} E_{ij} z_{j} ( t ) + f_{i} \quad (3) \]
where \(E= A^{2}\) and \(f_{i} = \sum_{j} A_{ij} d_{j}\).
We then note that, if the E matrix is symmetric, Eq. (3) can be viewed as the Euler–Lagrange equation:

\[ \frac{d}{dt} \biggl( \frac{\partial \mathcal{L}}{\partial \dot{z}_{i}} \biggr) - \frac{\partial \mathcal{L}}{\partial z_{i}} = 0 \quad (4) \]
for the Lagrangian given by

\[ \mathcal{L} = \frac{1}{2} \sum_{i} \dot{z}_{i}^{2} + \frac{1}{2} \sum_{i,j} z_{i} E_{ij} z_{j} + \sum_{i} f_{i} z_{i} \quad (5) \]
thereby giving a Lagrangian description of Eq. (3).
However, although any solution of the first-order Eq. (2) is also a solution of the second-order Euler–Lagrange equation Eq. (4), the reverse is not true: the second-order Euler–Lagrange equation Eq. (4) also has other solutions that are not valid solutions of the original DCM recovery model Eq. (2). To illustrate the problem, consider a simple example with only one region. The DCM equation simplifies to \(\dot{z} =Az+d\), the general solution of which is \(z= k_{1} e^{At} - \frac{d}{A}\), where \(k_{1}\) is an arbitrary constant of integration. The general solution of the corresponding Euler–Lagrange equation, \(\ddot{z} = A^{2} z+Ad\), is \(z= k_{1} e^{At} + k_{2} e^{-At} - \frac{d}{A}\), where \(k_{1}\) and \(k_{2}\) are both arbitrary constants. This is just one example of a general problem: if one differentiates a first-order differential equation, the general solution of the resulting second-order ordinary differential equation depends on two arbitrary constants – not one. Every solution of the first-order equation must also be a solution of the second-order equation, but most of the solutions of the second-order equation do not satisfy the first-order equation. Therefore, it is not possible to cast the neuronal state equation Eq. (1) in the form of a Lagrangian in a way that allows either for the modelling of time-dependent external inputs and noise, or for unique recovery via the principle of stationary action.
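The non-uniqueness argument can be checked numerically. The sketch below (with illustrative values A = −1, d = 1, not taken from the paper) integrates the first-order equation and its differentiated second-order counterpart: a consistent initial velocity reproduces the first-order solution, whereas any other choice yields a spurious solution that satisfies the second-order equation but not the first:

```python
import numpy as np

A, d = -1.0, 1.0
dt, n = 0.001, 5000  # integrate to t = 5

# First-order equation: zdot = A z + d, with z(0) = 2
z1 = np.empty(n)
z1[0] = 2.0
for k in range(n - 1):
    z1[k + 1] = z1[k] + dt * (A * z1[k] + d)

# Second-order Euler-Lagrange counterpart: zddot = A^2 z + A d
def second_order(v0):
    z = np.empty(n)
    z[0], v = 2.0, v0
    for k in range(n - 1):
        z[k + 1] = z[k] + dt * v
        v += dt * (A * A * z[k] + A * d)
    return z

z2_consistent = second_order(A * 2.0 + d)  # zdot(0) = A z(0) + d = -1
z2_spurious = second_order(0.0)            # violates the first-order equation
```

The consistent run tracks the first-order solution (both decay towards 1), while the spurious run excites the extra growing mode \(k_2 e^{-At}\) and diverges.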
The complex oscillatory equation of motion
We will now demonstrate that it is possible to cast the neuronal state equation Eq. (1) in the form of a Lagrangian as long as the following three conditions are met: (a) the dependent variable z is complex; (b) the left-hand side is multiplied by the imaginary unit i – thus rendering the solutions oscillatory; and (c) the A and C matrices in Eq. (1) are Hermitian. This modified neuronal state equation is written as follows:

\[ i \dot{z}_{i} ( t ) = \sum_{j} A_{ij} z_{j} ( t ) + \sum_{j} C_{ij} v_{j} ( t ) + \omega _{i}^{ ( z )} ( t ) \quad (6) \]
The neuronal state Lagrangian
We are now able to put forward the central proposition of this paper, which is that Eq. (6) can be derived from the following Lagrangian:

\[ \mathcal{L} = \frac{i}{2} \sum_{j} \bigl( z_{j}^{*} \dot{z}_{j} - \dot{z}_{j}^{*} z_{j} \bigr) - \sum_{j,k} z_{j}^{*} A_{jk} z_{k} - \sum_{j,k} \bigl( z_{j}^{*} C_{jk} v_{k} + v_{j} C_{jk} z_{k} \bigr) - \sum_{j} \bigl( z_{j}^{*} + z_{j} \bigr) \omega _{j}^{ ( z )} \quad (7) \]
where we use star notation to indicate complex conjugation and have assumed that the external perturbations \(v_{j}\) and noises \(\omega _{j}^{(z)}\) are real and, as the A and C matrices are Hermitian, the Lagrangian is also real.
We will show that this proposed Lagrangian is correct by verifying that the original equation of motion Eq. (6) can be recovered from Eq. (7) via the principle of stationary action. To do this, we temporarily proceed under the assumption that the variables z, \(z^{*}\), ż, and \(\dot{z}^{*}\) are independent of one another – it will become clear below why this is a valid assumption. We can then write the general variation of \(\mathcal{L} ( z, \dot{z}, z^{*}, \dot{z}^{*} )\) as follows:

\[ \delta \mathcal{L} = \sum_{j} \biggl( \frac{\partial \mathcal{L}}{\partial z_{j}} \delta z_{j} + \frac{\partial \mathcal{L}}{\partial \dot{z}_{j}} \delta \dot{z}_{j} + \frac{\partial \mathcal{L}}{\partial z_{j}^{*}} \delta z_{j}^{*} + \frac{\partial \mathcal{L}}{\partial \dot{z}_{j}^{*}} \delta \dot{z}_{j}^{*} \biggr) \quad (8) \]
Since the values of z, ż, \(z^{*}\), \(\dot{z}^{*}\) and their variations δz, δż, \(\delta z^{*}\), \(\delta \dot{z}^{*}\) are all arbitrary, this equation must also hold when we add restrictions by insisting that: (a) \(z^{*}\) must be equal to the complex conjugate of z (and hence \(\delta z^{*}\) must be equal to the complex conjugate of δz); and (b) that ż is the derivative of z (and hence δż is the time derivative of δz). In other words, although we begin by treating z, ż, \(z^{*}\) and \(\dot{z}^{*}\) as independent variables, Eq. (8) must also hold when \(z^{*}\) is the complex conjugate of z and ż is the time derivative of z. Assuming this to be the case from now on, the Lagrange equations of motion are derived in the standard way by looking for functions (“paths”) \(z(t)\) that render the action \(S [ z(t) ] = \int _{t_{i}}^{t_{f}} \mathcal{L} ( z ( t ), \dot{z} ( t ), z^{*} ( t ), \dot{z}^{*} ( t ) )\,dt\) stationary. We consider two related variations of \(z(t)\) given by
where \(\delta\eta (t)\) is any differentiable complex function of time vanishing at the initial time \(t_{i}\) and the final time \(t_{f}\). Using Eq. (8), together with the two variations in Eq. (9) (both of which are general because \(\delta \eta (t)\) is a general variation), the principle of stationary action tells us that:
where we divide the second equation by i and add/subtract it to/from the first to give
Finally, we integrate the second terms in Eq. (11) by parts, noting that the boundary terms vanish because \(\delta \eta =0\) at \(t_{i}\) and \(t_{f}\), to obtain
Since δη is arbitrary, the Euler–Lagrange equations

\[ \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{z}_{j}} - \frac{\partial \mathcal{L}}{\partial z_{j}} = 0, \qquad \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{z}_{j}^{*}} - \frac{\partial \mathcal{L}}{\partial z_{j}^{*}} = 0 \quad (13) \]
follow directly from Eq. (12).
Evaluating the necessary derivatives of our proposed Lagrangian in Eq. (7):
shows that the corresponding Euler–Lagrange equations are
i.e., we recover the complex oscillatory DCM neuronal state equation Eq. (6) and its adjoint. Because the A and C matrices are Hermitian, these two equations are complex conjugates of each other – i.e., whenever one is true, so is the other and we do not need to solve them separately. It is for this reason that we must include an imaginary unit i in the Lagrangian formulation – the addition flips the signs of the \(\frac{\partial }{\partial t}\) terms in the complex conjugate of the Lagrangian in Eq. (7) and hence renders the derivatives with respect to z and \(z^{*}\) complex conjugates of one another. Note that, unlike the example case using real variables in Eq. (2) through Eq. (5), the use of complex variables allows for the modified state equation Eq. (6) to be uniquely recovered in a way that allows for both time-dependent external inputs and noise terms.
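A quick numerical check of the modified equation of motion: for a Hermitian A (the 2×2 matrix below is illustrative, not taken from the paper), the eigenfrequencies of \(i\dot{z} = Az\) are real, the dynamics are purely oscillatory, and the norm \(\sum_j z_j^* z_j\) is conserved:

```python
import numpy as np

# Illustrative Hermitian intrinsic coupling matrix
A = np.array([[0.5, 0.2 - 0.1j],
              [0.2 + 0.1j, 0.3]])
assert np.allclose(A, A.conj().T)      # Hermitian by construction

def f(z):
    return -1j * (A @ z)               # rearranged: zdot = -i A z

# RK4 integration of the oscillatory equation up to t = 10
z = np.array([1.0, 0.0], dtype=complex)
dt = 0.01
norm0 = np.vdot(z, z).real
for _ in range(1000):
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    z += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

freqs = np.linalg.eigvalsh(A)          # real eigenfrequencies (A Hermitian)
```

The conserved norm is the discrete analogue of the unitarity discussed in the Conclusions.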
The neuronal state Hamiltonian
The Hamiltonian \(\mathcal{H}\) is related to the Lagrangian via the Legendre transform: \(\mathcal{H}= \sum_{k} \frac{\partial \mathcal{L}}{\partial \dot{z}_{k}} \dot{z}_{k} - \mathcal{L}\), where, in the case of Eq. (7), the summation is taken over the two variables z and \(z^{*}\), such that
Note that the neuronal system is influenced by time-dependent external perturbations v and noise ω. The time-translation invariance of the Lagrangian that leads, via Noether’s theorem [5], to the principle of energy conservation therefore does not hold and the value of the Hamiltonian function (energy) is not conserved. However, we can consider a non-dissipative version of the neuronal state equation and its adjoint:

\[ i \dot{z}_{i} ( t ) = \sum_{j} A_{ij} z_{j} ( t ) \quad (17) \]

\[ -i \dot{z}_{i}^{*} ( t ) = \sum_{j} A_{ij}^{*} z_{j}^{*} ( t ) \quad (18) \]
for which, by comparison with Eqs. (7) and (16), the associated Lagrangian and Hamiltonian are given by
To show that the Hamiltonian is indeed conserved, we differentiate it in time as follows:
which, using Eqs. (13), (17), and (18), reads
i.e., energy does not change in time, meaning that it is conserved by virtue of time-translational invariance in the underlying equations of motion Eqs. (17) and (18).
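The conservation argument can be verified numerically. In the sketch below we take the non-dissipative Hamiltonian to be the quadratic form \(\sum_{j,k} z_j^* A_{jk} z_k\) (an assumption consistent with a Schrödinger-type Lagrangian; the A matrix and initial state are illustrative) and evaluate it along the exact solution of \(i\dot{z} = Az\):

```python
import numpy as np

# Illustrative Hermitian coupling matrix and initial state
A = np.array([[0.4, 0.1 + 0.2j],
              [0.1 - 0.2j, -0.3]])
z0 = np.array([0.6, 0.8 - 0.2j])

# Exact solution via the eigendecomposition: z(t) = U exp(-i w t) U^dagger z0
w, U = np.linalg.eigh(A)

def H(z):
    # Assumed Hamiltonian: z^dagger A z (real because A is Hermitian)
    return np.vdot(z, A @ z).real

ts = np.linspace(0.0, 30.0, 200)
Hs = [H(U @ (np.exp(-1j * w * t) * (U.conj().T @ z0))) for t in ts]
```

Under this assumption the energy is constant along the trajectory, mirroring the analytic result above.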
Simulations
Here, we consider a network consisting of three connected nodes, with the first node receiving an external driving input (Fig. 1(A)). This external input provides a Gaussian ‘bump’ function of peristimulus time (Fig. 1(A) – inset). The network is intrinsically connected with symmetric off-diagonal elements and negative leading diagonal components. This initializes the system with stable dynamics because the eigenvalues of the Jacobian all have negative real components (Fig. 1(B)). We then specify the priors of the implicit coupling matrix by setting all off-diagonal coupling parameters to zero and by setting the leading diagonal elements to \(-1/4\) (Fig. 1(C)) – see the accompanying code for all priors and settings. We then use the original state equation Eq. (1) to run the model forward and generate synthetic data (Fig. 1(D)). Next, we change the dependent variable so that it comprises two components – one real \(\operatorname{Re} ( t )\) and one imaginary \(\operatorname{Im} ( t )\) – and multiply the left-hand side by the imaginary unit i, such that we now deal with the modified state equation Eq. (6) written as
Here, equating the real and imaginary components gives the following two equations:
where we use an observation equation that retains the real component (see accompanying code) to generate synthetic data via the modified model (Fig. 1(E)).
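The splitting into real and imaginary components can be sanity-checked as follows, where the (real, symmetric) A matrix and the constant drive are illustrative rather than taken from the accompanying code; the complex update and the split real/imaginary updates coincide step for step:

```python
import numpy as np

# Real symmetric coupling (as in the simulated network) and a constant drive
A = np.array([[-0.25, 0.10],
              [0.10, -0.25]])
Cv = np.array([0.5, 0.0])

dt, n = 0.005, 2000
z = np.array([0.2, -0.1], dtype=complex)    # complex form of the state
re, im = z.real.copy(), z.imag.copy()       # split form of the same state

for _ in range(n):
    # Complex form: i zdot = A z + C v  =>  zdot = -i (A z + C v)
    z = z + dt * (-1j) * (A @ z + Cv)
    # Split form: Re-dot = A Im (imaginary part), Im-dot = -(A Re + C v) (real part)
    re_new = re + dt * (A @ im)
    im_new = im - dt * (A @ re + Cv)
    re, im = re_new, im_new
```

Both integrations perform identical arithmetic, so the two representations agree to machine precision.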
We then performed four separate model inversions – using dynamic expectation maximization (DEM) [17] – to retrieve posterior estimates of the intrinsic connectivities for: (a) the original data with the original model (Fig. 1(F)); (b) the original data with the modified model (Fig. 1(G)); (c) the modified data with the original model (Fig. 1(H)); and (d) the modified data with the modified model (Fig. 1(I)). Note that Fig. 1(I) shows that the priors in Fig. 1(C) are closely recovered – however, not perfectly, as seen by running the accompanying code. We can now compare the variational free energies (and associated probabilities) for (a) and (b) – showing that the original data is better fitted using the original model (Fig. 1(J)). Similarly, we can compare the variational free energies (and associated probabilities) for (c) and (d) – showing that the modified data is better explained by the modified model (Fig. 1(K)). As such, we demonstrate proof of principle by showing that the model inversion can correctly identify which model was used to generate the data. Finally, we show that when we change Eq. (23) such that external driving inputs and noise are excluded:
the Hamiltonian in Eq. (19) is indeed constant in time when we run the model forward with a symmetric intrinsic coupling matrix furnished with the posteriors obtained from Fig. 1(I) (Fig. 1(L)).
Experimental data
Here, we use the same model inversion techniques presented in Fig. 1, except that, instead of synthetic data, we use publicly available empirical time-series collected using EEG [18] (Fig. 2(A)), fNIRS [19] (Fig. 2(B)), and ECoG [20] (Fig. 2(C)) neuroimaging modalities. Instead of the Gaussian function used in Fig. 1, we here use a random external input connected to all three channels, to model ongoing neuronal fluctuations. We also show the estimated time-series for these datasets using both the original (Fig. 2(D), (E), (F)) and the modified (Fig. 2(G), (H), (I)) models.
Using the same priors as for the synthetic data in Fig. 1, Bayesian model inversion based on the original state equation Eq. (1) furnished posterior estimates of the intrinsic connectivity between the first three channels for the EEG (Fig. 2(J)), fNIRS (Fig. 2(K)), and ECoG (Fig. 2(L)) data. We repeated this procedure by using the modified state equation Eq. (6) to obtain different posterior estimates for the EEG (Fig. 2(M)), fNIRS (Fig. 2(N)), and ECoG (Fig. 2(O)) data. We can then compare the variational free energy – i.e., the model evidence that balances the trade-off between accuracy and complexity – for the original vs. the modified state equation for the EEG (Fig. 2(P)), fNIRS (Fig. 2(Q)), and ECoG (Fig. 2(R)) data, together with the associated probabilities (Figs. 2(P), (Q), (R), insets). In these multimodal examples, the modified (complex, oscillatory) state equation provides a better account (higher variational free energy) for the EEG and fNIRS datasets, but not for the ECoG dataset. Note that the extent of the difference in variational free energies between the original and modified models is determined by the noise parameters in the accompanying DCM code – e.g., the precision and prior variance. Finally, we show that the Hamiltonians are conserved for the non-dissipative cases using the posteriors of the EEG (Fig. 2(S)), fNIRS (Fig. 2(T)), and ECoG (Fig. 2(U)) models.
Both Figs. 1 and 2 can be reproduced in full via the accompanying code.
Conclusions
The aim of our work was to show how one of the first-order linear equations that dominate computational neuroscience can be rendered compatible with the principle of stationary action with a minimum number of simple modifications. Furthermore, we wanted to show how to embed the Lagrangian model within a data fitting framework in a way that allows it to be readily applied to time-series of arbitrary dimensionality from any neuroimaging modality.
By making the modifications outlined in this paper, we obtain a state equation that is fundamentally different from the original state equation, with the main difference lying in the solutions to the equations.
To see why this is the case, we can take the simplest case of a single unconnected region in the absence of external driving inputs or noise as modelled by the original (unmodified) state equation:

\[ \dot{z} ( t ) = z ( t ) \quad\Rightarrow\quad \ln z = t + c \quad\Rightarrow\quad z ( t ) = c' e^{t} \quad (25) \]
where c is a constant of integration and \(c'= e^{c}\). We therefore see that the solution increases exponentially in time (or decreases if the original differential equation has a minus sign). Let us now consider the modified state equation for this simple case, as obtained by changing the dependent variable to a complex number and multiplying the left-hand side by the imaginary unit:

\[ i \dot{z} ( t ) = z ( t ) \quad\Rightarrow\quad z ( t ) = c' e^{-it} \quad (26) \]
The solution now oscillates in time and Eq. (26), unlike Eq. (25), is compatible with the principle of stationary action. We then note that the modified state equation can be reformulated by writing the complex dependent variable z in terms of its real x and imaginary y components such that: \(z=x+iy\) and hence:

\[ i ( \dot{x} + i \dot{y} ) = x + iy \quad (27) \]
where we can equate the real and imaginary components to give the following two equations:

\[ \dot{x} = y, \qquad \dot{y} = -x \quad (28) \]
We see that the modified state equation is just another way of representing a simple harmonic oscillator. It should therefore come as no surprise that the modification facilitates compatibility with the principle of stationary action, as harmonic oscillators are readily described by Lagrangian formulations. Furthermore, the fact that the modified form provides a more parsimonious description in some neuroimaging datasets may be indicative of the underlying oscillatory equation capturing the intrinsic oscillations present in neural systems across scales [21].
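This equivalence is easy to verify: for \(z(0)=1\), the solution of \(i\dot{z} = z\) is \(z = e^{-it}\), whose real and imaginary parts trace out the harmonic-oscillator pair \(x = \cos t\), \(y = -\sin t\):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 400)
z = np.exp(-1j * t)          # exact solution of i zdot = z with z(0) = 1
x, y = z.real, z.imag        # x = cos(t), y = -sin(t)

# Check the oscillator equations numerically: xdot = y and ydot = -x
xdot = np.gradient(x, t)
ydot = np.gradient(y, t)
```

The finite-difference derivatives reproduce \(\dot{x} = y\) and \(\dot{y} = -x\) to within discretisation error, confirming the simple-harmonic-oscillator reading of the modified equation.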
The imaginary unit i famously appears in the Schrödinger equation \(i\hslash \dot{\psi } =H\psi \), which we see possesses the same mathematical structure as the computational neuroscience models considered here: it is first order in time and linearly proportional to the dependent variable. One of the reasons Schrödinger introduced the imaginary unit into his equation is that it preserves unitarity, guaranteeing that the value of the integral \(\int \psi ^{*} ( \boldsymbol{r},t ) \psi ( \boldsymbol{r},t ) \,d^{3} r\) is independent of time. The value of this integral is the probability that the particle is found somewhere in space and must therefore be conserved on physical grounds (particles do not appear or disappear in non-relativistic quantum theory); the i in the Schrödinger equation ensures this conservation mathematically. Every eigenvector component of ψ oscillates in time like the function \(e^{-i\omega t}\), while the corresponding component of \(\psi ^{*}\) oscillates like \(e^{i\omega t}\). The conservation of probability follows because \(e^{-i\omega t} e^{i\omega t} =1\). On the other hand, we can consider the consequences of calculating a probability in this way when the imaginary unit i is not present: \(\psi ( t ) \psi ^{*} ( t ) \sim e^{\omega t} e^{\omega t} = e^{2 \omega t} \), i.e. the probability is time dependent and is therefore not conserved.
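The contrast can be made concrete in a few lines (the frequency ω is an arbitrary illustrative value): with the imaginary unit the ‘probability’ \(\psi^* \psi\) is constant, while without it the same construction grows exponentially:

```python
import numpy as np

t = np.linspace(0, 5, 100)
omega = 1.3  # illustrative frequency

# With the imaginary unit: a mode evolves as e^{-i omega t}
psi = np.exp(-1j * omega * t)
prob_with_i = (psi.conj() * psi).real     # e^{-i w t} e^{i w t} = 1 for all t

# Without it: the mode evolves as e^{omega t}
phi = np.exp(omega * t)
prob_without_i = phi * phi                # grows as e^{2 omega t}
```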
The preservation of unitarity has potentially interesting consequences regarding neural systems if we are modelling probabilistic quantities such as the chance of observing neural activation beyond a certain threshold. In this case, the presence of the imaginary unit i in the modified equations of motion could imply a conservation of neuronal firing or depolarisation in the system, which could in turn be plausibly maintained by the balance between excitation and inhibition [22, 23]. Together with the need for oscillatory solutions, we showed that compatibility between the neuronal state equation and the principle of stationary action necessitated the use of complex variables. A further advantage of working with complex variables is that the adjacency or coupling matrix A can be transformed into a Hermitian form, where the real parts describe dissipation and the complex parts describe oscillatory or solenoidal dynamics [24, 25] that underwrite rhythms in the brain [26] – we refer the reader to the section entitled “Dynamics and statistical physics” in [27] for a comprehensive treatment. More generally, the ability to work with non-dissipative solenoidal dynamics means that one can eschew detailed balance and characterise (neuronal) systems in terms of their non-equilibrium steady states [28–30].
There is an established method for constructing an action for a classical system, often referred to as the MSRDJ (Martin–Siggia–Rose–De Dominicis–Janssen) formalism [31]. This can be applied to arbitrary systems of first-order differential equations but differs from our approach in important ways and addresses different questions. The MSRDJ formalism applies to stochastic differential equations such as the Langevin equation, which include random noise terms (as does the neuronal state equation). Given a set of initial conditions, the solution of a stochastic differential equation depends on the realization of the noise as a function of time, with different realizations producing different solutions. Viewed as a whole, the stochastic differential equation therefore generates a time-evolving probability distribution of solutions, the form of which depends on the nature of the noise. The MSRDJ approach shows how this probability distribution evolves in time and allows one to calculate moments, correlation functions, and other noise-averaged statistical properties as functions of time. The extremum of the MSRDJ action yields the most likely path of the system in the presence of the random noise. It should therefore be stressed that the formalism presented in this paper is not the only way in which a Lagrangian formulation of a first-order linear equation can be obtained – see, for example, the review by Chow et al. [10].
Our approach, by contrast, formulates the complex neuronal state equation in Lagrangian terms, and the corresponding Euler–Lagrange equation reproduces the original differential equation exactly, including the noise terms. When we solve the complex neuronal-state Euler–Lagrange equation, we are therefore evolving a specific solution of the stochastic differential equation, not a probability distribution of solutions. The MSRDJ approach yields access to properties such as the most likely path of escape from a potential well in the sense of large deviations [32–34]. Our approach, on the other hand, is useful in terms of finding solutions for particular realizations of the noise – e.g., for comparison with unaveraged experimental time-series.
The conservation laws uncovered by inspecting the symmetries of the Lagrangian – such as the time-translation invariance that led to the conservation of the value of the Hamiltonian in Eqs. (19), (20), and (21) – are difficult to interpret at this stage, as they have yet to be mapped onto biological mechanisms: this remains a subject for future research to be explored in combination with previous techniques for analysing symmetries in dynamical systems [35]. The purpose of this work was to demonstrate simple techniques by which researchers can create Lagrangian neuronal state models, in a way that allows for symmetries and associated conservation laws to be identified in neural systems.
Availability of data and materials
All relevant code and data is made available at the following public repository: https://github.com/allavailablepubliccode/Lagrangian.
Abbreviations
DCM: dynamic causal modelling
DEM: dynamic expectation maximization
EEG: electroencephalography
fNIRS: functional near-infrared spectroscopy
ECoG: electrocorticography
References
1. Landau LD. The classical theory of fields. Amsterdam: Elsevier; 2013.
2. Feynman R, Morinigo F, Wagner W. Feynman lectures on gravitation. Reading: Addison-Wesley; 1995.
3. Shankar R. Principles of quantum mechanics. Berlin: Springer; 2012.
4. Cottingham WN, Greenwood DA. An introduction to the standard model of particle physics. Cambridge: Cambridge University Press; 2007.
5. Noether E. Invariante Variationsprobleme. In: Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-physikalische Klasse. 1918. p. 235–57.
6. Gray C, Karl G, Novikov V. Direct use of variational principles as an approximation technique in classical mechanics. Am J Phys. 1996;64(9):1177–84.
7. Gray C, Karl G, Novikov V. Progress in classical and quantum variational principles. Rep Prog Phys. 2004;67(2):159.
8. Breakspear M. Dynamic models of large-scale brain activity. Nat Neurosci. 2017;20(3):340–52.
9. Buice MA, Cowan JD. Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E. 2007;75(5):051919.
10. Chow CC, Buice MA. Path integral methods for stochastic differential equations. J Math Neurosci. 2015;5(1):8.
11. Hertz JA, Roudi Y, Sollich P. Path integral methods for the dynamics of stochastic and disordered systems. J Phys A, Math Theor. 2016;50(3):033001.
12. Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E. 2020;101(4):042124.
13. Crisanti A, Sompolinsky H. Path integral approach to random neural networks. Phys Rev E. 2018;98(6):062120.
14. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. NeuroImage. 2003;19(4):1273–302.
15. Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HE, Breakspear M, et al. Nonlinear dynamic causal models for fMRI. NeuroImage. 2008;42(2):649–62. https://doi.org/10.1016/j.neuroimage.2008.04.262.
16. Li B, Daunizeau J, Stephan KE, Penny W, Hu D, Friston K. Generalised filtering and stochastic DCM for fMRI. NeuroImage. 2011;58(2):442–57. https://doi.org/10.1016/j.neuroimage.2011.01.085.
17. Friston KJ, Trujillo-Barreto N, Daunizeau J. DEM: a variational treatment of dynamic systems. NeuroImage. 2008;41(3):849–85. https://doi.org/10.1016/j.neuroimage.2008.02.054.
18. Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR. BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans Biomed Eng. 2004;51(6):1034–43.
19. Pfurtscheller G, Bauernfeind G, Wriessnegger SC, Neuper C. Focal frontal (de)oxyhemoglobin responses during simple arithmetic. Int J Psychophysiol. 2010;76(3):186–92.
20. Vansteensel MJ, Hermes D, Aarnoutse EJ, Bleichner MG, Schalk G, Van Rijen PC, et al. Brain–computer interfacing based on cognitive control. Ann Neurol. 2010;67(6):809–16.
21. Buzsaki G. Rhythms of the brain. London: Oxford University Press; 2006.
22. Isaacson JS, Scanziani M. How inhibition shapes cortical activity. Neuron. 2011;72(2):231–43. https://doi.org/10.1016/j.neuron.2011.09.027.
23. Xue M, Atallah BV, Scanziani M. Equalizing excitation-inhibition ratios across visual cortical neurons. Nature. 2014;511(7511):596–600. https://doi.org/10.1038/nature13321.
24. Friston KJ, Kahan J, Razi A, Stephan KE, Sporns O. On nodes and modes in resting state fMRI. NeuroImage. 2014;99:533–47. https://doi.org/10.1016/j.neuroimage.2014.05.056.
25. Friston KJ, Fagerholm ED, Zarghami TS, Parr T, Hipólito I, Magrou L, et al. Parcels and particles: Markov blankets in the brain. 2020. https://ui.adsabs.harvard.edu/abs/2020arXiv200709704F. Accessed 1 Jul 2020. arXiv:2007.09704.
26. Buzsaki G, Logothetis N, Singer W. Scaling brain size, keeping timing: evolutionary preservation of brain rhythms. Neuron. 2013;80(3):751–64. https://doi.org/10.1016/j.neuron.2013.10.002.
27. Friston KJ, Fagerholm ED, Zarghami TS, Parr T, Hipólito I, Magrou L, et al. Parcels and particles: Markov blankets in the brain. Netw Neurosci. 2021;5(1):211–51.
28. Friston K. A free energy principle for a particular physics. 2019. https://ui.adsabs.harvard.edu/abs/2019arXiv190610184F. Accessed 1 Jun 2019. arXiv:1906.10184.
29. Yan H, Zhao L, Hu L, Wang X, Wang E, Wang J. Nonequilibrium landscape theory of neural networks. Proc Natl Acad Sci USA. 2013;110(45):E4185–E4194. https://doi.org/10.1073/pnas.1310692110.
30. Kwon C, Ao P. Nonequilibrium steady state of a stochastic system driven by a nonlinear drift force. Phys Rev E. 2011;84(6):061106. https://doi.org/10.1103/PhysRevE.84.061106.
31. Martin PC, Siggia E, Rose H. Statistical dynamics of classical systems. Phys Rev A. 1973;8(1):423.
32. Baule A, Sollich P. Optimal escape from metastable states driven by non-Gaussian noise. 2015. arXiv preprint. arXiv:1501.00374.
33. Bressloff PC. Stochastic neural field theory and the system-size expansion. SIAM J Appl Math. 2010;70(5):1488–521.
34. Bressloff PC, Faugeras O. On the Hamiltonian structure of large deviations in stochastic hybrid systems. J Stat Mech Theory Exp. 2017;2017(3):033206.
35. Canet L, Chaté H, Delamotte B, Wschebor N. Nonperturbative renormalization group for the Kardar–Parisi–Zhang equation. Phys Rev Lett. 2010;104(15):150601.
Acknowledgements
The authors would like to acknowledge support from the Data to Early Diagnosis and Precision Medicine Industrial Strategy Challenge Fund, UK Research and Innovation (UKRI), the National Institute for Health Research (NIHR), the Biomedical Research Centre at South London and Maudsley NHS Foundation Trust, and King’s College London.
Funding
EDF was supported by a King’s College London Prize Fellowship; KJF was funded by a Wellcome Principal Research Fellowship (Ref: 088130/Z/09/Z); RJM was funded by the Wellcome/EPSRC Centre for Medical Engineering (Ref: WT 203148/Z/16/Z); RL was funded by the MRC (Ref: MR/R005370/1).
Author information
Contributions
All authors conceived of the analysis and wrote the paper. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fagerholm, E.D., Foulkes, W.M.C., Friston, K.J. et al. Rendering neuronal state equations compatible with the principle of stationary action. J. Math. Neurosc. 11, 10 (2021). https://doi.org/10.1186/s13408-021-00108-0
Keywords
 Stationary action
 Lagrangian
 Computational neuroscience
 Neural state equations