Kernel Reconstruction for Delayed Neural Field Equations
 Jehan Alswaihli^{1, 2},
 Roland Potthast^{1, 3},
 Ingo Bojak^{4},
 Douglas Saddy^{4} and
 Axel Hutt^{1, 3}
https://doi.org/10.1186/s13408-018-0058-8
© The Author(s) 2018
Received: 26 June 2017
Accepted: 17 January 2018
Published: 5 February 2018
Abstract
Understanding the neural field activity for realistic living systems is a challenging task in contemporary neuroscience. Neural fields have been studied and developed theoretically and numerically with considerable success over the past four decades. However, to make effective use of such models, we need to identify their constituents in practical systems. This includes the determination of model parameters and in particular the reconstruction of the underlying effective connectivity in biological tissues.
In this work, we provide an integral equation approach to the reconstruction of the neural connectivity in the case where the neural activity is governed by a delay neural field equation. As preparation, we study the solution of the direct problem based on the Banach fixed-point theorem. Then we reformulate the inverse problem into a family of integral equations of the first kind. This equation is vector-valued when several neural activity trajectories are taken as input for the inverse problem. We employ spectral regularization techniques for its stable solution. A sensitivity analysis of the regularized kernel reconstruction with respect to the input signal u is carried out, investigating the Fréchet differentiability of the kernel with respect to the signal. Finally, we use numerical examples to show the feasibility of the approach for kernel reconstruction, including numerical sensitivity tests, which show that the integral equation approach is a stable and promising tool for practical computational neuroscience.
1 Introduction
In recent years, the study of neural tissue activity and the development of mathematical and numerical techniques to understand neural processes have led to improved neural field models. Since the early work of Wilson, Cowan and Amari in the 1970s, neural field models have become an effective tool in neuroscience [1–4].
The neural networks occurring in nature are typically complex systems sporting a large variety of properties in space and time. Simplifying their analysis is generally difficult—in particular when one considers the many billions of neurons of the entire human nervous system, where each of these neurons can be considered as a complex biological system in and of itself, cf. [5]. However, neural field models describe these complicated systems mathematically in a few equations, essentially by using the large number of neurons to achieve simplification in terms of mass action. Thus these models consider averages of the neural activity as a dynamical variable, and averages of neural properties as parameters. The derivation of neural models from properties of single neurons and their networks, and the analysis of the resulting activity, remains a major focus of current research [1, 6–11].
In this century, many papers have treated the neural field equation with and without delays. Some studies provide a framework for the existence, uniqueness and stability of solutions of the neural field equation, such as [8–15], while others build effective methods to investigate and assimilate neural field activity; see for example [16–21] for techniques of data assimilation and inverse problems applied to the case without delays. Recently, Nogaret et al. [7] built a model construction method using an optimization technique to assimilate neural data, determining parameters in a detailed neural model including delay.
A challenge often encountered in the study of living systems is to estimate a spatial connectivity kernel w. In a neural system this connectivity kernel usually corresponds to the synaptic footprint, i.e., the connections from a neuron to others by synapses forming between its branching axon and their dendritic trees. Typically, measurements are available for the activity function u at particular spatial locations, e.g., where neurons are patch clamped or electrodes are placed in the extracellular medium. The task then becomes to derive the spatial connectivity from these experimental data. This approach limits the estimation of connectivity to the set of spatial locations of measurements. In the present work, we propose to improve this conventional approach by studying the inverse problem where the full activity function u is given at each location in a given spatial domain and the underlying spatial connectivity is derived. The problem of having limited measurements is part of subsequent work combining inverse techniques with state estimation techniques. Here, we focus on the problem to reconstruct the kernel w when u is known.
The present work considers neural field models that involve delayed spatial interactions and where the delay may depend on the distance between spatial locations [11, 14, 22]. We will assume that the delay function \(D(r,r')\) between spatial locations r, \(r'\) is known. For instance, this is the case when the delay is linked to the geometry of the problem, e.g., when \(D(r,r')\sim \Vert r - r' \Vert \), the distance between the points r and \(r'\) in some domain Ω. This assumption is common in practice, since for direct neural connections the delay is essentially the distance divided by the signal propagation speed, which can be assumed to be a universal constant in a first approximation.
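When delays are proportional to Euclidean distance, the delay function can be tabulated directly for a set of discretization nodes. A minimal sketch (the node positions on the unit circle and the speed c = 3 are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def delay_matrix(nodes, c):
    """Tabulate D(r, r') = ||r - r'|| / c for all node pairs.

    nodes : (N, m) array of positions in R^m (illustrative choice)
    c     : assumed constant signal propagation speed
    """
    diff = nodes[:, None, :] - nodes[None, :, :]   # pairwise differences r - r'
    return np.linalg.norm(diff, axis=-1) / c

# Example: three nodes on the unit circle, hypothetical speed c = 3.0
theta = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
nodes = np.stack([np.cos(theta), np.sin(theta)], axis=1)
D = delay_matrix(nodes, c=3.0)
```

The resulting matrix is symmetric with zero diagonal, matching the assumption \(D \in \operatorname{BC}(\varOmega\times\varOmega,\mathbb{R}^{+})\) used in Theorem 3.1.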
Neural field models consider spatially nonlocal interactions, which may be expressed equivalently either by higher orders of spatial derivatives or by spatial integrals [22, 23]. In the first part of this paper, we will show how the methods used in [12] can be modified to study existence and stability of solutions in a neural field model with delay. The basic idea is to split the integral operators under consideration into parts with positive and negative temporal arguments. As a result we obtain a direct and flexible basic existence proof for a delay neural field equation, which includes a constructive method based on integral equations only. These results have been derived by other authors [8, 10, 11, 24] with more sophisticated techniques, but it is nontrivial that the arguments used for neural fields without delay are applicable to the delay case, and the approach in our Sect. 2, based on several relatively simple functional analytic arguments, is of interest by itself.
Second, we will show that the kernel reconstruction problem for the delay neural field equation can be reformulated into a family of integral equations of the first kind. When several trajectories of neural activity are given, the family of integral equations is vector-valued. This turns out to be an ill-posed problem; for smooth neural activity it is even exponentially ill-posed. To formulate stable numerical methods for its solution, we need to employ regularization. Here, we use a spectral approach to classical Tikhonov regularization [25–27]. We then study the sensitivity of the mapping \(u \mapsto w\), showing that its regularized version is Fréchet differentiable, and we calculate the derivative by means of integral equations.
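The spectral form of Tikhonov regularization replaces the unstable division by small singular values of a first-kind operator with a damped filter. A minimal sketch for a discretized operator (the matrix A, its singular values, and the parameter alpha below are toy illustrations):

```python
import numpy as np

def tikhonov(A, f, alpha):
    """Spectral Tikhonov regularization R_alpha f for a discretized
    first-kind operator A.  Instead of dividing each singular component
    of f by sigma_j (unstable for small sigma_j), damp it by the filter
    sigma_j / (sigma_j**2 + alpha)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + alpha)) * (U.T @ f))

# Toy check: a diagonal 'operator' with rapidly decaying singular values
A = np.diag([1.0, 0.5, 0.2, 0.1])
w_true = np.ones(4)
w_rec = tikhonov(A, A @ w_true, alpha=1e-14)
```

This is equivalent to solving the regularized normal equations \((A^{\ast}A + \alpha I)w = A^{\ast}f\); as α → 0 the regularized solution converges to the projection of the true kernel onto \(N(A)^{\perp}\), as in the proof of Theorem 4.1.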
In the third part of the paper, we show by a numerical study that the kernel reconstruction from a delay neural field is feasible. We numerically solve the family of integral equations under consideration by a collocation method and provide a study of reconstructions based on the regularization of the ill-posed integral operators under consideration. This includes a study of the influence of measurement noise on the reconstruction quality and tests of the role of the regularization parameter.
We start with a concise version of the equations in Sect. 2, and in Sect. 3 prepare our inverse approach by a study of the existence for the delay neural field equation. The central section, Sect. 4, serves to develop a family of integral equations to solve the inverse problem for the delay neural field equation. The numerical realization of the approach is shown in Sect. 5, where we demonstrate that with an appropriate regularization the inverse problem is solvable, i.e., prescribed kernels can be constructed and reconstructed kernels generate a neural environment leading to the prescribed neural behaviour.
2 The Mathematical Model
The existence of solutions to the neural field equation (3) has been investigated in various papers already [10–12]. For example, Potthast and beim Graben [12] provide the proof of existence and its analysis in the case of no delay, i.e. for \(D(r,r')=0\). In addition, Faugeras and Faye [10], in their Theorem 3.2.1, state the general existence of solutions with a reference to the generic theory of delay equations, based on work such as [35]. We also point out the work of Van Gils et al. [8] employing the sun–star calculus for their analysis and [24] in which the local bifurcation theory for delayed neural fields was developed. Here, we develop arguments on how to use the basic functional analytic calculus to work for the delay case as well, with the goal to present a short and elementary approach which is easily accessible.
3 The Delay Neural Field Equation
We assume that the connectivity kernel w satisfies the following hypotheses:

(H1) \(w(r,\cdot)\in L^{1}(\varOmega)\) for all \(r \in\varOmega\subset \mathbb{R}^{m}\),

(H2) \(\sup_{r \in\varOmega} \Vert w(r,\cdot)\Vert_{L^{1}(\varOmega)} \leq C_{1}\),

(H3) \(\Vert w(r,\cdot) - w(r^{\ast},\cdot)\Vert_{L^{1}(\varOmega)} \rightarrow 0\) as \(\Vert r - r^{\ast}\Vert \rightarrow 0\).
Theorem 3.1
(Existence)
If the kernel w satisfies (H1)–(H3), and if the delay term D is bounded and continuous, i.e., if we have \(D \in\operatorname{BC}(\varOmega\times\varOmega,\mathbb{R}^{+})\), then for any \(T>0\) and any initial field \(u_{0}\) given by the initial condition (5), there exists a unique solution \(u \in C^{1} (\varOmega\times[0,T])\) to the delay neural field equation (3) on \([0,T]\).
Proof
IV. According to the Banach fixed-point theorem, there is one and only one fixed point \(u^{\ast}\) of the fixed-point equation (13). We have thus shown the existence of a unique solution \(u(x,t)\) for all \(t \in[0,\rho]\). Now, the same argument can be applied to the interval \([\rho,2\rho]\), and subsequently to \([2\rho,3\rho]\) etc. in the same way. This leads to the existence and uniqueness result on the interval \([0,T]\). □
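The successive-approximation argument behind the proof can be illustrated numerically: in integral (variation-of-constants) form the right-hand side defines a contraction on a short interval, so Picard iterates converge geometrically to the unique solution. A toy sketch for a single scalar node with delay d (all parameters, the sigmoid f, and the grid are illustrative assumptions, not the paper's setting):

```python
import numpy as np

# Toy delayed equation: tau u'(t) = -u(t) + f(u(t - d)), u(t) = u0 for t <= 0.
# Fixed-point form on [0, rho]:
#   (T u)(t) = u0 exp(-t/tau) + (1/tau) int_0^t exp(-(t-s)/tau) f(u(s - d)) ds
tau, d, u0, rho = 1.0, 0.3, 0.5, 0.5
f = lambda v: 1.0 / (1.0 + np.exp(-v))        # sigmoid firing rate (illustrative)
t = np.linspace(0.0, rho, 101)
dt = t[1] - t[0]

def delayed(u, s):
    """u(s - d) on the grid, falling back to the history u0 for s - d <= 0."""
    idx = np.round((s - d) / dt).astype(int)
    return np.where(idx >= 0, u[np.clip(idx, 0, len(u) - 1)], u0)

def T(u):
    """One Picard step: discretized fixed-point map (rectangular rule)."""
    out = np.empty_like(u)
    for k, tk in enumerate(t):
        s = t[: k + 1]
        kern = np.exp(-(tk - s) / tau)
        out[k] = u0 * np.exp(-tk / tau) + dt / tau * np.sum(kern * f(delayed(u, s)))
    return out

u = np.full_like(t, u0)                        # start from the constant history
sup_diffs = []                                 # sup-norm distance of iterates
for _ in range(8):
    u_next = T(u)
    sup_diffs.append(np.max(np.abs(u_next - u)))
    u = u_next
```

The sup-norm distances between successive iterates shrink geometrically, mirroring the contraction estimate used on each subinterval \([k\rho, (k+1)\rho]\).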
Remark
We note that the proof also works when some bounded continuous forcing term \(I(r,t)\), \(r\in\varOmega\), \(t \in[0,T]\), is added to the neural field equation (3). It leads to an additional term in Eq. (13), for which all arguments remain valid.
4 The Inverse Problem of Kernel Reconstruction with Delays
We now come to the kernel reconstruction from given dynamical neural patterns with delay. We first formulate a regularized kernel reconstruction approach based on integral equations in Sect. 4.1, then we carry out a sensitivity analysis in Sect. 4.2.
4.1 Kernel Reconstruction with Delays

the nonlinear activation function \(f: \mathbb {R}\rightarrow \mathbb {R}^{+}\) is known, and

the delay function \(D: \varOmega\times\varOmega\rightarrow[0,c_{T}]\) is given.
Theorem 4.1
Proof
Here, we base our reconstruction on a wellknown result (cf. [21], Theorem 3.1.8) that states that Tikhonov regularization is a regularization scheme in the sense of Definition 3.1.4 of [21], i.e., that if \(f = A(\varphi^{\ast}) \in R(A)\), then \(R_{\alpha}f \rightarrow \varphi^{\ast}\) for \(\alpha\rightarrow0\). If A is not injective, splitting the space into \(N(A)\) and \(N(A)^{\perp}=\overline{A^{\ast}(X)}\), we see by \(w_{r} = Pw_{r} + (I-P)w_{r}\) and the mapping properties of \(A^{\ast}\) that the convergence of \(R_{\alpha}f\) is towards the projection \(P\varphi^{\ast}\) of \(\varphi^{\ast}\) onto \(N(A)^{\perp}\). In our case, the reconstruction calculates an approximation to \(Pw^{\ast}_{r}\). This completes the proof. □
4.2 Sensitivity Analysis
An important basic question is the influence of noise on the reconstruction. Here, we carry out a sensitivity analysis, i.e. we calculate the Fréchet derivative of the reconstructed kernel with respect to the input function u. Differentiability is obtained in a straightforward manner following [21], Chap. 2.6.
Theorem 4.2
Assume that the activation function f is continuously differentiable with derivative \(f'\) bounded on \(\mathbb {R}\). Then, for each fixed \(\alpha>0\), the regularized reconstruction of the kernel w from input signals u within the framework of the delay neural field equation is continuously Fréchet differentiable with respect to u considered as mapping from \(\operatorname{BC}(\varOmega)\times C^{1}([0,T])\) into \(\operatorname {BC}(\varOmega)\times L^{1}(\varOmega)\). This implies continuity of the mapping of u onto w. The total derivative of \(w_{r}\) with respect to u is obtained by the combination of (46) with (49), (51) and (52).
5 Numerical Examples
The goal of this section is to demonstrate the feasibility of the inverse method for the reconstruction of spatial kernels based on the spatiotemporal neural field activity. We study the feasibility in Sect. 5.1 and the sensitivity with respect to variations in the input function u in Sect. 5.2.
5.1 Feasibility of Kernel Reconstructions
First, we consider a one-dimensional manifold embedded in a two-dimensional space, illustrating the method for a case with 10,000 degrees of freedom. Then an example involving a two-dimensional spatial domain evaluates the method for an inverse problem with more than 200,000 degrees of freedom for the kernel estimation.

if one uses a local kernel \(w(r,r')\) with strong connectivity only in a neighbourhood of r, boundary effects will appear for neurons close to the boundary of the domain, since fewer neurons are included in such a neighbourhood there; whereas

if the activity of neurons close to the boundary is close to zero, usually such boundary effects remain negligible.
Example 1
We implemented the delay neural field equation in MATLAB^{®} based on an Euler method for the time evolution of the system, with zeroth-order or first-order quadrature (rectangular rule or trapezoidal rule) for the integral parts of the integro-differential equation. For the purpose of studying the kernel reconstruction on a rather short temporal window, this simple approach is completely sufficient and does not show any deficiencies compared to higher-order methods for the forward problem, as employed for example in [21, 28, 40].
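A Python sketch of the same scheme (the paper's MATLAB code is not reproduced here; the activation function and all values in the usage check are illustrative assumptions). The quadrature weights are what distinguish the rectangular from the trapezoidal rule on a bounded interval:

```python
import numpy as np

def quad_weights(N, dx, rule="trapezoidal"):
    """Quadrature weights for int_Omega g(r') dr' on N grid nodes.
    Rectangular rule: all weights dx; trapezoidal rule: endpoints halved."""
    wts = np.full(N, dx)
    if rule == "trapezoidal":
        wts[0] = wts[-1] = dx / 2.0
    return wts

def euler_step(u_now, u_delayed, w, f, wts, dt, tau):
    """One explicit Euler step of tau du/dt = -u + int w f(u_delayed) dr'.
    u_delayed[i, j] holds u(r_j, t - D(r_i, r_j)); w is the (N, N) kernel."""
    integral = (w * f(u_delayed)) @ wts   # row i: sum_j w[i,j] f(ud[i,j]) wts[j]
    return u_now + dt / tau * (-u_now + integral)
```

Stepping the field forward then amounts to refreshing the delayed matrix `u_delayed` from stored history at each step and calling `euler_step` repeatedly.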
Parameter values for Example 1. Simulations have been carried out with \(N=101\) equally distributed nodes on the circle, \(N_{t}=50\) time steps, and a time step size \(\Delta t=0.2\) for the inverse problem
\(r_{0}\) = (cos(π), sin(π)),  σ = 1.0
\(r_{1}\) = (cos(π/3), sin(π/3)),  τ = 1.0
\(r_{2}\) = (cos(−π/3), sin(−π/3)),  c = 3.0
As a test, we employ the reconstructed kernel with the same initial condition to calculate a reconstructed neural field \(u_{\mathrm{rec}}(r,t)\) on \((r,t)\in\partial B_{R}\times[0,T]\). The original dynamics is shown in black in Fig. 1, based on the kernel (55) visualized in Fig. 2(a). The reconstructed dynamics is shown in red in Fig. 1, based on the reconstructed kernel visualized in Fig. 2(b). A very good agreement between original and reconstructed solution is observed.
Example 2
Parameter values for Example 2. Simulations have been carried out with \(N=21\times22=462\) nodes, \(N_{t}=30\) time steps with time step size \(\Delta t=0.2\) for the inverse problem. The kernel estimation problem has 213,444 degrees of freedom
\(r_{0}\) = (1.5, 3.0),  σ = 2.0
\(r_{1}\) = (4.5, 4.5),  τ = 1.0
\(r_{2}\) = (4.5, 1.5),  c = 2.1
The kernel \(w(r,r')\) with \(r,r' \in\varOmega\) now lives on a subset \(U:= \varOmega\times\varOmega\) of a four-dimensional space, since Ω is a subset of a two-dimensional patch. Visualization of \(w(r,r')\) can be carried out either by fixing \(r'\) and showing a two-dimensional surface plot, or by reordering r and \(r'\) into one-dimensional vectors, so that \(w(r,r')\) can be displayed in full as a two-dimensional surface. The first approach is chosen in Fig. 4(c), where the white star indicates \(r'\). The second approach is shown in Fig. 4(a). Next, we solve the inverse delay neural problem and reconstruct the kernel based on equation (30), regularized as indicated by equations (39) and (42). Again, this is carried out by calculating ϕ and ψ first according to equations (25) and (26), then solving equation (35) by regularization via equation (39) with the regularization parameter chosen as \(\alpha=0.1\). This choice, found by trial and error, leads to reasonable stability of the reconstructions combined with high reconstruction quality.
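The reconstruction pipeline can be sketched end to end on a small synthetic ring problem: simulate the delayed field with a known kernel, then, for each location \(r_i\), assemble the discrete analogue of the first-kind system—rows are time samples of f applied to the delayed field—and invert it by Tikhonov regularization. The Gaussian test kernel, the tanh activation, and every parameter below are illustrative assumptions, not the paper's Example 2 setup:

```python
import numpy as np

# --- synthetic forward problem on a ring (all parameters illustrative) ---
N, T_steps, dt, tau = 24, 80, 0.1, 1.0
dx = 2 * np.pi / N
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
d = np.abs(x[:, None] - x[None, :])
dist = np.minimum(d, 2 * np.pi - d)             # ring distance |r - r'|
w_true = np.exp(-dist**2)                       # hypothetical Gaussian footprint
lag = np.round(dist / 3.0 / dt).astype(int)     # delay steps: D = dist / c, c = 3
f = np.tanh                                     # illustrative activation
u = np.tile(np.cos(x), (T_steps + 1, 1))        # u[k, j] ~ u(r_j, k dt); row 0 = history
for k in range(T_steps):
    kd = np.clip(k - lag, 0, None)
    ud = np.where(k - lag >= 0, u[kd, np.arange(N)], u[0])   # delayed field
    u[k + 1] = u[k] + dt / tau * (-u[k] + (w_true * f(ud)).sum(axis=1) * dx)

# --- inverse problem: one first-kind system per location r_i ---
alpha = 1e-8                                    # Tikhonov parameter (trial choice)
w_rec = np.empty_like(w_true)
res = []
for i in range(N):
    A = np.empty((T_steps, N))                  # A[k, j] = f(u(r_j, t_k - D_ij)) dx
    for k in range(T_steps):
        kd = np.clip(k - lag[i], 0, None)
        A[k] = f(np.where(k - lag[i] >= 0, u[kd, np.arange(N)], u[0])) * dx
    # data: left-hand side of the Euler-discretized field equation at r_i
    psi = tau * (u[1:, i] - u[:-1, i]) / dt + u[:-1, i]
    w_rec[i] = np.linalg.solve(A.T @ A + alpha * np.eye(N), A.T @ psi)
    res.append(np.linalg.norm(A @ w_rec[i] - psi))
```

By construction the synthetic data satisfy \(\psi = A w^{\ast}\) exactly, so the Tikhonov minimizer's data misfit is bounded by \(\sqrt{\alpha}\,\Vert w^{\ast}\Vert\); where the activity never excites part of the domain, the corresponding kernel columns are damped towards zero, as discussed above.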
Figures 4(c) and 4(d) display the original and the reconstructed kernel column, which represents the impact of the location at the black star on all other spatial locations of the neural patch. The result as displayed in (d) shows that the regularized reconstruction of the delay neural kernel is not perfect. However, it works well wherever the field activity reaches specific parts of the neural environment. Otherwise the reconstruction is simply zero, due to missing input for the reconstruction equations and the regularization chosen here, which penalizes the distance to the zero kernel function. Overall, the results clearly demonstrate the feasibility of the method.
5.2 Sensitivity with Respect to Functional Input
In this section we will carry out a numerical sensitivity study of our first example to explore the dependence of the kernel reconstructions on the input function u. It complements our sensitivity analysis of Sect. 4.2.
According to Theorem 4.2 we have continuity of \(u\mapsto w\), so that if we lower the error ε for fixed α, the reconstructions must converge to the kernel reconstructed from error-free data. Indeed, we obtain figures similar to Fig. 6 when we lower the error parameter ε from \(\varepsilon =0.01\) to \(\varepsilon =0.005\) and \(\varepsilon =0.001\), approaching the reconstruction displayed in Fig. 2(b) for \(\varepsilon =0\).
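This continuity can also be checked against the classical spectral bound \(\Vert R_{\alpha}\Vert \le 1/(2\sqrt{\alpha})\) for Tikhonov regularization: a data perturbation δ changes the reconstruction by at most \(\Vert\delta\Vert/(2\sqrt{\alpha})\). A small numerical check (the matrix, data, and noise below are arbitrary illustrations):

```python
import numpy as np

def tikhonov(A, psi, alpha):
    """Tikhonov reconstruction via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ psi)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 25))               # arbitrary discretized operator
psi = rng.standard_normal(40)                   # noise-free data (illustrative)
delta = 0.01 * rng.standard_normal(40)          # simulated measurement noise
alpha = 0.1

change = np.linalg.norm(tikhonov(A, psi + delta, alpha) - tikhonov(A, psi, alpha))
bound = np.linalg.norm(delta) / (2.0 * np.sqrt(alpha))
```

The bound follows from the singular-value filter \(\sigma/(\sigma^{2}+\alpha) \le 1/(2\sqrt{\alpha})\), so for fixed α > 0 the reconstruction depends Lipschitz-continuously on the data, consistent with the convergence observed when ε is lowered.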
6 Conclusions
The purpose of this work is to develop an integral equation approach for kernel reconstructions in delay neural field equations and to study its practical feasibility. We simulate the activity and evolution of a delayed neural field of Amari-type to develop an effective approach to reconstructing the neural connectivity. As a preparation for the inverse problem, this work includes an explicit study of the solvability of the direct problem of the delayed neural field equation (3). We provide an easily accessible functional analytic approach based on an integral equation and Banach's fixed-point theorem.
As our main result, we apply inverse problems techniques to reconstructing the neural kernel assuming that some measurements of the activity \(u(r,t)\) are given. We start by formulating a family of integral equations of the first kind. Since kernel reconstruction is ill posed, we need regularization to obtain stable solutions. As stabilization method we employ the Tikhonov regularization. A sensitivity analysis is carried out, showing that the mapping of the input u to the regularized kernel reconstruction is Fréchet differentiable. The derivative is explicitly calculated based on the integral equation approach.
Finally, we provide numerical examples in one- and two-dimensional spatial domains. These examples show that the regularized reconstruction of the delay neural kernel is practically feasible. We study the numerical sensitivity by adding random noise of size ε (testing 1%, 0.1% and 0.01%) and studying the regularized reconstruction with different regularization parameters.
In this work, we assume the delay function D to be given, as would be the case when the delay is approximately proportional to the distance between the nodes under consideration. If D is unknown, w is known and u is measured, we can solve equation (3) for \(u(r',t-D(r,r'))\) for all r, \(r'\) and t. This is still ill-posed, since it involves an integral equation of the first kind, but then the determination of D is reduced to the reconstruction of D from the knowledge of \(u(r',t-D(r,r'))\), which strongly depends on the form of the signal u and the conditions we impose on D. If neither the delay D nor w were given, the kernel \(K_{r}\) of the operator \(V_{r}\) would be unknown and part of the reconstruction, leading to many open questions of feasibility and observability. In general, the reconstruction of both the kernel w and the delay D is an important nonlinear, far-reaching and challenging problem for future research.
In summary, we have developed a stable and efficient approach for the reconstruction of the connectivity in neural systems based on delay neural field equations. We expect the approach to be extensible to a wide range of field models with delay, and in particular to be highly useful for analyses of experimental data in the domain of computational neuroscience. These methods allow for the reconstruction of the underlying ‘synaptic footprint’ of connectivity from available neural activity measurements, thus providing a basis for simulation and prediction of real phenomena in the neurosciences.
For large-scale problems a conjugate-gradient method is used to solve the equation sequentially. For smaller problems, matrix inversion by Gauss' method is sufficient.
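Since the regularized normal equations \((A^{\ast}A + \alpha I)w = A^{\ast}\psi\) are symmetric positive definite, a matrix-free conjugate-gradient iteration avoids forming or factoring the full matrix. The specific variant used by the authors is not stated; this is a textbook sketch that touches A only through matrix-vector products:

```python
import numpy as np

def cg_tikhonov(A, psi, alpha, tol=1e-10, max_iter=500):
    """Conjugate gradients on (A^T A + alpha I) w = A^T psi,
    using A only through the products A @ v and A.T @ v."""
    apply = lambda v: A.T @ (A @ v) + alpha * v   # SPD operator, never formed
    b = A.T @ psi
    w = np.zeros(A.shape[1])
    r = b - apply(w)                              # initial residual
    p = r.copy()                                  # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply(p)
        gamma = rs / (p @ Ap)                     # step length
        w += gamma * p
        r -= gamma * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p                 # conjugate update
        rs = rs_new
    return w
```

For small problems the same system can of course be solved directly by Gaussian elimination, as noted above; the two agree to within the CG tolerance.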
Declarations
Acknowledgements
The first author would like to thank the Libyan Government which funded her research under the Libyan Embassy Ref No. 10128, Grant 393/12.
Availability of Data and Materials
Data sharing not applicable to this article as no datasets were generated or analysed during the current study. Please contact the author for data requests.
Funding
Not applicable.
Authors’ Contributions
RP, IB, AH, DS and JA interacted on the formulation of the setup, the design of examples, ideas and proofreading. JA and RP carried out the core proofs, the writing and the programming. All authors read and approved the final manuscript.
Ethics Approval and Consent to Participate
Not applicable.
Competing Interests
The authors declare that they have no competing interests.
Consent for Publication
Not applicable.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Coombes S, beim Graben P, Potthast R, Wright J. Neural fields: theory and applications. Berlin: Springer; 2014.
 Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J. 1972;12:1–24.
 Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13:55–80.
 Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.
 Boeree CG. The neurons. General Psychology. 2015. p. 1–6.
 Bressloff PC, Coombes S. Physics of the extended neuron. Int J Mod Phys B. 1997;11:2343–92.
 Nogaret A, Meliza CD, Margoliash D, Abarbanel HDI. Automatic construction of predictive neuron models through large scale assimilation of electrophysiological data. Sci Rep. 2016;6:32749.
 van Gils S, Janssens SG, Kuznetsov Y, Visser S. On local bifurcations in neural field models with transmission delays. J Math Biol. 2013;66:837–87.
 Venkov NA. Dynamics of neural field models [PhD thesis]. 2008.
 Faye G, Faugeras O. Some theoretical and numerical results for delayed neural field equations. Physica D. 2010;239:561–78.
 Atay FM, Hutt A. Stability and bifurcations in neural fields with finite propagation speed and general connectivity. SIAM J Appl Math. 2005;65(2):644–66.
 Potthast R, beim Graben P. Existence and properties of solutions for neural field equations. Math Methods Appl Sci. 2010;33:935–49.
 Venkov NA, Coombes S, Matthews PC. Dynamic instabilities in scalar neural field equations with space-dependent delays. Physica D. 2007;232:1–15.
 Veltz R, Faugeras O. Stability of the stationary solutions of neural field equations with propagation delays. J Math Neurosci. 2011;1:1.
 Veltz R, Faugeras O. A center manifold result for delayed neural fields equations. SIAM J Math Anal. 2013;45(3):1527–62.
 beim Graben P, Potthast R. Inverse problems in dynamic cognitive modeling. Chaos, Interdiscip J Nonlinear Sci. 2009;19:015103.
 Freitag MA, Potthast RWE. Synergy of inverse problems and data assimilation techniques. In: Large scale inverse problems. Radon series on computational and applied mathematics. 2013. p. 1–54.
 Potthast R. Inverse problems and data assimilation for brain equations—state and current challenges. 2015.
 Potthast R. Inverse problems in neural population models. In: Encyclopedia of computational neuroscience. 2013.
 Potthast R, beim Graben P. Dimensional reduction for the inverse problem of neural field theory. Front Comput Neurosci. 2009;3:17.
 Nakamura G, Potthast R. Inverse modeling: an introduction to the theory and methods of inverse problems and data assimilation. Bristol: IOP Publishing; 2015.
 Hutt A. Generalization of the reaction-diffusion, Swift–Hohenberg, and Kuramoto–Sivashinsky equations and effects of finite propagation speeds. Phys Rev E. 2007;75:026214.
 Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C. Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys Rev E. 2007;76:051901.
 Dijkstra K, van Gils SA, Janssens SG. Pitchfork–Hopf bifurcations in 1D neural field models with transmission delays. Physica D. 2015;297:88–101.
 Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Mathematics and its applications. Dordrecht: Springer; 2000.
 Groetsch CW. Inverse problems in the mathematical sciences. Theory and practice of applied geophysics series. Wiesbaden: Vieweg; 1993.
 Kress R. Linear integral equations. Applied mathematical sciences. vol. 82. New York: Springer; 1999.
 Coombes S, beim Graben P, Potthast R. Tutorial on neural field theory. In: Neural fields: theory and applications. 2014.
 James MP, Coombes S, Bressloff PC. Effects of quasi-active membrane on multiply periodic travelling waves in integrate-and-fire systems. 2003.
 Laing CR, Coombes S. The importance of different timings of excitatory and inhibitory pathways in neural field models. 2005.
 Bojak I, Liley DT. Axonal velocity distributions in neural field equations. PLoS Comput Biol. 2010;6(1):e1000653.
 Coombes S, Schmidt H. Neural fields with sigmoidal firing rates: approximate solutions. Nottingham ePrints. 2010.
 Rankin J, Avitabile D, Baladron J, Faye G, Lloyd DJ. Continuation of localised coherent structures in nonlocal neural field equations. 2013. arXiv:1304.7206.
 Bressloff PC, Kilpatrick ZP. Two-dimensional bumps in piecewise smooth neural fields with synaptic depression. SIAM J Appl Math. 2011;71:379–408.
 Diekmann O. Delay equations: functional, complex, and nonlinear analysis. Berlin: Springer; 1995.
 Hutt A, Buhry L. Study of GABAergic extra-synaptic tonic inhibition in single neurons and neural populations by traversing neural scales: application to propofol-induced anaesthesia. J Comput Neurosci. 2014;37(3):417–37.
 Kirsch A. An introduction to the mathematical theory of inverse problems. Applied mathematical sciences. New York: Springer; 2011.
 Hutt A, Bestehorn M, Wennekers T. Pattern formation in intracortical neural fields. Netw Comput Neural Syst. 2003;14:351–68.
 Wennekers T. Orientation tuning properties of simple cells in area V1 derived from an approximate analysis of nonlinear neural field models. Neural Comput. 2001;13:1721–47.
 Potthast R, beim Graben P. Inverse problems in neural field theory. SIAM J Appl Dyn Syst. 2009;8(4):1405–33.