 Research
 Open Access
Regularization of Ill-Posed Point Neuron Models
 Bjørn Fredrik Nielsen
https://doi.org/10.1186/s13408-017-0049-1
© The Author(s) 2017
 Received: 1 March 2017
 Accepted: 19 June 2017
 Published: 14 July 2017
Abstract
Point neuron models with a Heaviside firing rate function can be ill-posed. That is, the initial-condition-to-solution map might become discontinuous in finite time. If a Lipschitz continuous but steep firing rate function is employed, then standard ODE theory implies that such models are well-posed and can thus, approximately, be solved with finite precision arithmetic. We investigate whether the solution of this well-posed model converges to a solution of the ill-posed limit problem as the steepness parameter of the firing rate function tends to infinity. Our argument employs the Arzelà–Ascoli theorem and also yields the existence of a solution of the limit problem. However, we only obtain convergence of a subsequence of the regularized solutions. This is consistent with the fact that models with a Heaviside firing rate function can have several solutions, as we show. Our analysis assumes that the vector-valued limit function v, provided by the Arzelà–Ascoli theorem, is threshold simple: that is, the set containing the times when one or more of the component functions of v equal the threshold value for firing has zero Lebesgue measure. If this assumption does not hold, we argue that the regularized solutions may not converge to a solution of the limit problem with a Heaviside firing rate function.
Keywords
 Point neuron models
 Ill-posed
 Regularization
 Existence
1 Introduction
By employing electrophysiological properties one can argue that it is appropriate to use a steep sigmoid firing rate function \(S_{\beta}\). However, due to its mathematical convenience, the Heaviside function is also often employed, see, e.g., [5–8]. Unfortunately, when \(\beta= \infty\) the initial-condition-to-solution map for (1) can become discontinuous in finite time [9]. Such models are thus virtually impossible to solve with finite precision arithmetic [10, 11]. Also, in the steep but Lipschitz continuous firing rate regime, the error amplification can be extreme, even though a minor perturbation of the initial condition does not change which neurons fire. It is important to note that this ill-posed nature of the model is a fundamentally different mathematical property from the possible existence of unstable equilibria, which typically also occur if a firing rate function with moderate steepness is used. See [9] for further details.
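The error amplification in the steep regime can be made concrete with a minimal numerical sketch. It assumes a scalar version of model (1) of the common form \(\tau u' = -u + \omega S_{\beta}(u - u_{\theta}) + q\) (the paper's exact coefficients may differ), with the logistic sigmoid \(S_{\beta}(x) = 1/(1+e^{-\beta x})\); all parameter values below are chosen purely for illustration:

```python
import math

def S(beta, x):
    # logistic sigmoid; approaches the Heaviside function as beta -> infinity
    return 1.0 / (1.0 + math.exp(-beta * x))

def solve(u0, beta, tau=1.0, omega=2.0, u_theta=1.0, q=0.0, T=5.0, n=5000):
    # explicit Euler for tau u' = -u + omega * S_beta(u - u_theta) + q
    dt, u = T / n, u0
    for _ in range(n):
        u += dt / tau * (-u + omega * S(beta, u - u_theta) + q)
    return u

beta = 200.0
hi = solve(1.0 + 1e-3, beta)   # starts just above threshold: neuron fires
lo = solve(1.0 - 1e-3, beta)   # starts just below threshold: activity decays
print(hi, lo)  # the 2e-3 gap in the initial condition is amplified to O(1)
```

Both runs are well-posed for this finite β, yet the two trajectories end up near the two different stable states, illustrating why finite precision arithmetic struggles as β grows.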
It is thus sufficient that this set is finite or countable; see, e.g., [13]. Furthermore, in Sect. 7 we argue that, if v does not satisfy this threshold property, then this function will not necessarily solve the limit problem.
According to the Picard–Lindelöf theorem [15–17], (1) has a unique solution, provided that \(\beta< \infty\) and that the assumptions presented in the next section hold. In Sect. 5 we show that this uniqueness feature is not necessarily inherited by the limit problem obtained by employing a Heaviside firing rate function. It actually turns out that different subsequences of \(\{ \mathbf{u}_{\beta} \}\) can converge to different solutions of (1) with \(S_{\beta}=S_{\infty}\). This is explained in Sect. 6, which also contains a result addressing the convergence of the entire sequence \(\{ \mathbf{u}_{\beta} \}\).
The limit process \(\beta\rightarrow\infty\) is studied, using different techniques, in [18, 19] for the stationary solutions of neural field equations. It has also been observed for the Wilson–Cowan model that this transition is a subtle matter: using a steep sigmoid firing rate function instead of the Heaviside mapping can lead to significant changes in a Hopf bifurcation point, as 'the limiting value of the Hopf depends on the choice of the firing rate function' [20].
If one uses a Heaviside firing rate function in (1), the right-hand sides of these ODEs become discontinuous. A rather general theory for such equations has been developed [21]. In this theory the system of ODEs is replaced by a differential inclusion, in which the right-hand side of the ODE system is substituted by a set-valued function. The construction of this set-valued operator can be accomplished by invoking Filippov regularization/convexification. But this methodology serves a different purpose than the smoothing processes considered in this paper. More specifically, it makes it possible to prove that generalized solutions (Filippov solutions) to the problem exist, but it does not provide a family of well-posed equations suitable for numerical solution.
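The Filippov construction can be illustrated for a scalar right-hand side. The sketch below is hypothetical (the function and parameter values are not taken from the paper): at the discontinuity, the single value of \(f(u) = -u + \omega H(u - u_{\theta})\) is replaced by the interval spanned by its one-sided limits.

```python
def filippov_rhs(u, omega=2.0, u_th=1.0):
    """Set-valued right-hand side of u' = -u + omega*H(u - u_th).

    Returns an interval (lo, hi); away from the threshold the interval
    degenerates to a single value, at u = u_th it is the convex hull of
    the one-sided limits (Filippov convexification).
    """
    if u > u_th:
        return (-u + omega, -u + omega)   # H = 1 branch
    if u < u_th:
        return (-u, -u)                   # H = 0 branch
    return (-u_th, -u_th + omega)         # set-valued at the discontinuity

print(filippov_rhs(1.0))  # interval (-1.0, 1.0) at the threshold
```

A Filippov solution is any absolutely continuous function whose derivative lies in this interval almost everywhere; this yields existence, but, as noted above, no regularized family of well-posed problems.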
Smoothing techniques for discontinuous vector fields, which are similar to the regularization method considered in this paper, have been proposed and analyzed for rather general phase spaces [22–24]. However, these studies consider qualitative properties of large classes of problems, whereas we focus on a quantitative analysis of a very special system of ODEs.
2 Assumptions
Concerning the sequence \(\{ S_{\beta} \}\) of finite steepness firing rate functions, we make the following assumption.
Assumption A
 (a)
\(S_{\beta}\), \(\beta\in \mathbb {N}\), is Lipschitz continuous,
 (b)
\(0 \leq S_{\beta}(x) \leq1\), \(x \in \mathbb {R}, \quad \beta\in \mathbb {N}\),
 (c)
for every pair of positive numbers \((\epsilon, \delta)\) there exists \(Q \in \mathbb {N}\) such that$$\begin{aligned} \bigl\vert S_{\beta}(x) \bigr\vert &< \epsilon \quad \mbox{for } x < -\delta \mbox{ and } \beta> Q, \end{aligned}$$(6)$$\begin{aligned} \bigl\vert 1-S_{\beta}(x) \bigr\vert &< \epsilon \quad \mbox{for } x > \delta \mbox{ and } \beta> Q. \end{aligned}$$(7)
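For the logistic sigmoid \(S_{\beta}(x)=1/(1+e^{-\beta x})\), a standard choice satisfying Assumption A, the constant Q in (c) can be computed explicitly from ε and δ. The sketch below picks ε and δ arbitrarily and, using the monotonicity of \(S_{\beta}\), verifies (6) and (7) at the boundary points \(x=\mp\delta\):

```python
import math

def S(beta, x):
    # logistic sigmoid satisfying Assumption A
    return 1.0 / (1.0 + math.exp(-beta * x))

eps, delta = 1e-3, 0.05
# Solving S_beta(-delta) = eps for beta gives beta = ln(1/eps - 1)/delta,
# so any integer Q at or above this value works in condition (c).
Q = math.ceil(math.log(1.0 / eps - 1.0) / delta)

beta = Q + 1
# (6): S_beta(x) < eps for x < -delta; by monotonicity check x = -delta
assert S(beta, -delta) < eps
# (7): 1 - S_beta(x) < eps for x > delta; by monotonicity check x = delta
assert 1.0 - S(beta, delta) < eps
print(Q)
```

The smaller ε or δ is chosen, the larger Q becomes, which mirrors the fact that uniform closeness to the Heaviside function away from the origin forces the steepness to grow.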
We will consider a slightly more general version of the model than (3). More specifically, we allow the source term to depend on the steepness parameter, \(\mathbf{q}=\mathbf{q}_{\beta}\), but in such a way that the following assumption holds.
Assumption B
Allowing the external drive to depend on the steepness parameter makes it easier to construct illuminating examples. However, we note that our theory will also hold for the simpler case when q does not change as β increases.
In this paper we will assume that Assumptions A and B are satisfied.
3 Uniformly Bounded and Equicontinuous
3.1 Threshold Terminology
As we will see in subsequent sections, whether we can prove that v actually solves the limit problem with a Heaviside firing rate function depends on v's threshold properties. The following concepts turn out to be useful.
Definition
Threshold simple
Definition
Extra threshold simple
In words, z is extra threshold simple if there is a finite number of threshold crossings on the time interval \([0,T]\).
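On a sampled trajectory one can at least count threshold crossings numerically. The helper below is hypothetical (not from the paper), and a finite, grid-stable count is merely consistent with the sampled function being extra threshold simple, not a proof, since crossings can hide between grid points:

```python
import math

def threshold_crossings(z, u_theta):
    """Count sign changes of z(t_k) - u_theta along a sampled trajectory.

    Samples exactly at the threshold are skipped; each remaining sign
    change corresponds to (at least) one threshold crossing.
    """
    signs = [1 if v > u_theta else -1 for v in z if v != u_theta]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# a damped oscillation around the firing threshold u_theta = 1 crosses
# it finitely often on [0, 50]: consistent with "extra threshold simple"
traj = [1.0 + math.exp(-t / 10.0) * math.cos(t)
        for t in (k * 0.01 for k in range(5000))]
print(threshold_crossings(traj, 1.0))
```

In contrast, a function that sits exactly at the threshold on a set of positive measure would not be threshold simple at all, and no finite crossing count captures that situation.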
4 The Limit of the Subsequence
4.1 Preparations
The uniform convergence of \(\{ \mathbf{u}_{\beta_{k}} \}\) to v implies that the left-hand side and the first integral on the right-hand side of (16) converge to \(\boldsymbol{\tau} \mathbf{v}(t)  \boldsymbol{\tau} \mathbf {u}_{\mathrm{init}}\) and \( \int_{0}^{t} \mathbf{v}(s)\,ds\), respectively, as \(k \rightarrow\infty\). Also, due to assumption (11), the third integral on the right-hand side does not require any extra attention. We will thus focus on the second integral on the right-hand side of (16).
Lemma 4.1
Proof
4.2 Convergence of the Integral
Lemma 4.2
Proof
4.3 Limit Problem
The existence of solutions for point neuron models with a Heaviside firing rate function is summarized in the following theorem.
Theorem 4.3
If the limit v in (13) is threshold simple, then v solves (27). If v is extra threshold simple, then v also satisfies (28).
In [25] the existence issue for neural field equations with a Heaviside activation function is studied, but the analysis is different because a continuum model is considered. We would also like to mention that Theorem 4.3 cannot be regarded as a simple consequence of Carathéodory’s existence theorem [21, 26, 27] because the right-hand side of (28) is discontinuous with respect to v.
5 Uniqueness
If \(\beta< \infty\), then standard ODE theory [15–17] implies that (3) has a unique solution. Unfortunately, as will be demonstrated below, this desirable property is not necessarily inherited by the infinite steepness limit problem.
We will first explain why the uniqueness question is a subtle issue for point neuron models with a Heaviside firing rate function. Thereafter, additional requirements are introduced which ensure the uniqueness of an extra threshold simple solution.
5.1 Example: Several Solutions
We conclude that models with a Heaviside firing rate function can have several solutions; such problems can thus become ill-posed. (In [9] we showed that the initial-condition-to-solution map is not necessarily continuous for such problems and that the error amplification ratio can become very large in the steep but Lipschitz continuous firing rate regime.) Note that switching to the integral form (27) will not resolve the lack of uniqueness for the toy example considered in this subsection.
- If we define \(S_{\infty}(0)=1/2\), then neither \(v_{1}\) nor \(v_{2}\) satisfies the ODE in (29) for \(t=0\). (In the case \(\omega=2 u_{\theta}\), \(v_{3}\) satisfies the ODE in (29) for \(t=0\).)
- If we define \(S_{\infty}(0)=1\), then \(v_{1}\), but not \(v_{2}\), satisfies the ODE in (29) also for \(t=0\).
- If we define \(S_{\infty}(0)=0\), then \(v_{2}\), but not \(v_{1}\), satisfies the ODE in (29) also for \(t=0\).
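The non-uniqueness can be checked numerically. The sketch below assumes (29) is the scalar problem \(\tau v' = -v + \omega S_{\infty}(v-u_{\theta})\) with \(v(0)=u_{\theta}\) (hypothetical parameters; the paper's example may use different coefficients) and verifies that two closed-form candidates, a firing solution \(v_1\) and a decaying solution \(v_2\), both satisfy the ODE for every \(t>0\):

```python
import math

# hypothetical parameters for a toy instance of (29)
tau, omega, u_th = 1.0, 2.0, 1.0
H = lambda x: 1.0 if x > 0 else 0.0   # Heaviside branch (value at 0 irrelevant for t > 0)

# candidate 1: the neuron fires, v stays above threshold for t > 0
v1 = lambda t: omega + (u_th - omega) * math.exp(-t / tau)
# candidate 2: the neuron stays silent, v decays below threshold for t > 0
v2 = lambda t: u_th * math.exp(-t / tau)

def residual(v, t, h=1e-6):
    # |tau v'(t) + v(t) - omega * H(v(t) - u_th)| via a central difference
    dv = (v(t + h) - v(t - h)) / (2.0 * h)
    return abs(tau * dv + v(t) - omega * H(v(t) - u_th))

# both candidates satisfy the ODE at, e.g., t = 0.5, yet they share the
# initial value v(0) = u_th: uniqueness fails for the Heaviside model
print(residual(v1, 0.5), residual(v2, 0.5))
```

Only the behavior at the single instant \(t=0\), where \(v(t)-u_{\theta}\) hits zero, distinguishes the candidates, which is exactly the situation discussed in the bullet points above.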
5.2 Enforcing Uniqueness
Definition 1
Right smooth
A vectorvalued function \(\mathbf{z}:[0,T] \rightarrow \mathbb {R}^{N}\) is right smooth if \(\mathbf{z}'\) is continuous from the right for all \(t \in[0,T)\).
Theorem 5.1
The initial value problem (33) can have at most one solution which is both extra threshold simple and right smooth.
Proof
Since \(\mathbf{v}(a_{l+1}) = \tilde{\mathbf{v}} (a_{l+1})\) we can repeat the argument on the next interval \([a_{l+1},a_{l+2}]\). It follows by induction that \(\mathbf{v}(t) = \tilde{\mathbf{v}} (t), t \in[0,T]\). □
We would like to comment on the findings presented in the bullet points at the end of Sect. 5.1 in view of Theorem 5.1: in order to enforce uniqueness for the solution of (29), we can require that the ODE in (29) also be satisfied for \(t=0\). However, this might force us to define \(S_{\infty}(0) \neq \frac{1}{2}\), which differs from the standard definition of the Heaviside function H.
6 Convergence of the Entire Sequence
We have seen that point neuron models with a Heaviside firing rate function can have several solutions. One therefore might wonder if different subsequences of \(\{ \mathbf{u}_{\beta} \}\) can converge to different solutions of the limit problem. In this section we present an example which shows that this can happen, even though the involved sigmoid functions satisfy Assumption A.
6.1 Example: Different Subsequences Can Converge to Different Solutions
- \(u'_{\beta}(0) > c\) if β is even,
- \(u'_{\beta}(0) < c\) if β is odd,
We would like to mention that we have not been able to construct an example of this kind for Lipschitz continuous firing rate functions which converge pointwise to the Heaviside function also for \(x=0\).
6.2 Entire Sequence
We have seen that almost everywhere convergence of the sequence of firing rate functions to the Heaviside limit is not sufficient to guarantee that the entire sequence \(\{ u_{\beta} \}\) converges to the same solution of the limit problem. Nevertheless, one has the following result.
Theorem 6.1
Let v be the limit function in (13). If the limit of every convergent subsequence of \(\{ \mathbf{u}_{\beta} \}\) is extra threshold simple, right smooth and satisfies (33), then the entire sequence \(\{ \mathbf{u}_{\beta} \}\) converges uniformly to v.
Proof
On the other hand, both v and \(\tilde{\mathbf{v}}\) are limits of subsequences of \(\{ \mathbf{u}_{\beta} \}\) and are by assumption extra threshold simple, right smooth, and they satisfy (33). Hence, Theorem 5.1 implies that \(\tilde{\mathbf{v}} = \mathbf{v}\), which contradicts (43). We conclude that the entire sequence \(\{ \mathbf{u}_{\beta} \}\) must converge uniformly to v. □
7 Example: Threshold Advanced Limits
We will now show that threshold advanced limits, i.e. limits which are not threshold simple, may possess some peculiar properties. More precisely, such limits can potentially occur in (13), but they do not necessarily satisfy the limit problem obtained by using a Heaviside firing rate function.
Due to the properties of the firing rate function (41) the source term \(q_{\beta}\) in (44) becomes discontinuous. This can be avoided by instead using the smooth version (8)–(9) but then the analysis of this example becomes much more involved.
8 Discussion and Conclusions
 (a)
Attempt to solve the ill-posed equation with symbolic computations.
 (b)
Regularize the problem.
Our results show that the sequence \(\{ \mathbf{u}_{\beta} \}\) of regularized solutions will have at least one convergent subsequence. The limit, v, of this subsequence will satisfy the integral/Volterra form (27) of the limit problem, provided that the set \(Z(\mathbf{v})\), see (15), has zero Lebesgue measure. Unfortunately, it seems to be very difficult to impose restrictions which would guarantee that v obeys this threshold property, which we refer to as threshold simple. Also, the example presented in Sect. 7 shows that, if the limit v is not threshold simple, then this function may not solve the associated equation with a Heaviside firing rate function.
One could propose to overcome the difficulties arising when \(\beta= \infty\) by always working with finite slope firing rate functions. This would potentially yield a rather robust approach, provided that the entire sequence \(\{ \mathbf{u}_{\beta} \}\) converges, because further increasing an already large β would still guarantee that \(\mathbf{u}_{\beta}\) is close to the unique limit v. However, the fact that different convergent subsequences of \(\{ \mathbf{u}_{\beta} \}\) can converge to different solutions of the limit problem, as discussed in Sect. 6, suggests that this approach must be applied with great care. In addition, the error amplification in the steep firing rate regime can become extreme [9], and the accurate numerical solution of such models is thus challenging.
What are the practical consequences of our findings? As long as there does not exist very reliable biological information about the size of the steepness parameter β and the shape of the firing rate function \(S_{\beta}\), it seems that we have to be content with simulating with various \(\beta< \infty\). If one observes that \(\mathbf{u}_{\beta}\) approaches a threshold advanced limit, as β increases, or that the entire sequence does not converge, the alarm bell should ring. All simulations with large β must use error control methods which guarantee the accuracy of the numerical solution—we must keep in mind that we are trying to solve an almost illposed problem.
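The practical recipe above, simulating with several finite β and watching whether \(\mathbf{u}_{\beta}\) stabilizes, can be sketched as follows. The scalar model form and all parameter values are again illustrative assumptions, and the fixed small Euler step stands in for a proper error-controlled integrator:

```python
import math

def S(beta, x):
    # logistic sigmoid, with the exponent clamped to avoid overflow for large beta
    return 1.0 / (1.0 + math.exp(max(-700.0, min(700.0, -beta * x))))

def solve(beta, u0=1.2, tau=1.0, omega=2.0, u_th=1.0, T=3.0, n=40000):
    # explicit Euler for tau u' = -u + omega * S_beta(u - u_th); n is chosen
    # large because a steep S_beta makes the problem stiff near the threshold
    dt, u = T / n, u0
    for _ in range(n):
        u += dt / tau * (-u + omega * S(beta, u - u_th))
    return u

# monitor u_beta(T) as beta grows: stabilizing values suggest convergence of
# the whole sequence, erratic values are the "alarm bell" mentioned above
vals = [solve(b) for b in (50, 100, 200, 400, 800)]
print(vals)
```

For this initial condition, which is safely above the threshold, the values settle quickly; an initial condition near the threshold, or a threshold advanced limit, would instead show β-dependent jumps between branches.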
In neural field equations one employs a continuous variable, e.g., \(x \in \mathbb {R}\), instead of a discrete index \(i \in\{1,2, \ldots, N \}\). The sum in (1) is replaced by an integral; see [1, 2, 6]. For each time instance \(t \in[0,T]\) one therefore does not get a vector \(\mathbf{u}_{\beta}(t) \in \mathbb {R}^{N}\), as for the point neuron models analyzed in this paper, but a function \(\mathbf {u}_{\beta}(x,t)\), \(x \in \mathbb {R}\). That is, in neural field equations the object associated with each fixed \(t \in[0,T]\) belongs to an infinite dimensional space. It is often a subtle task to generalize concepts and proofs from a finite to an infinite dimensional setting. It is thus an open problem whether the techniques and results presented in this paper can be adapted to neural field models.
Declarations
Acknowledgements
This work was supported by The Research Council of Norway, project number 239070. The author would like to thank the reviewers and Prof. Wyller for a number of interesting comments, which significantly improved this paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Bressloff P. Spatiotemporal dynamics of continuum neural fields. J Phys A, Math Theor. 2012;45:033001.
 Ermentrout GB. Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys. 1998;61:353–430.
 Faugeras O, Veltz R, Grimbert F. Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Comput. 2009;21:147–87.
 Hopfield JJ. Neurons with graded response have collective computational properties like those of two-state neurons. Proc Natl Acad Sci USA. 1984;81:3088–92.
 Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern. 1977;27:77–87.
 Coombes S. Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005;93:91–108.
 Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: I. Traveling fronts and pulses. SIAM J Appl Math. 2001;62:206–25.
 Pinto DJ, Ermentrout GB. Spatially structured activity in synaptically coupled neuronal networks: II. Lateral inhibition and standing pulses. SIAM J Appl Math. 2001;62:226–43.
 Nielsen BF, Wyller J. Ill-posed point neuron models. J Math Neurosci. 2016;6:7.
 Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic; 1996.
 Wikipedia. Well-posed problem. https://en.wikipedia.org/wiki/Well-posed_problem (2017).
 Griffel DH. Applied functional analysis. Chichester: Ellis Horwood; 1981.
 Royden HL. Real analysis. 3rd ed. New York: Macmillan Co.; 1989.
 Wikipedia. Arzelà–Ascoli theorem. https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem (2017).
 Hirsch MW, Smale S. Differential equations, dynamical systems and linear algebra. San Diego: Academic Press; 1974.
 Teschl G. Ordinary differential equations and dynamical systems. Providence: Am. Math. Soc.; 2012.
 Wikipedia. Picard–Lindelöf theorem. https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem (2017).
 Oleynik A, Ponosov A, Wyller J. On the properties of nonlinear nonlocal operators arising in neural field models. J Math Anal Appl. 2013;398:335–51.
 Oleynik A, Ponosov A, Kostrykin V, Sobolev AV. Spatially localized solutions of the Hammerstein equation with sigmoid type of nonlinearity. J Differ Equ. 2016;261(10):5844–74.
 Harris J, Ermentrout GB. Bifurcations in the Wilson–Cowan equations with nonsmooth firing rate. SIAM J Appl Dyn Syst. 2015;14:43–72.
 Filippov AF. Differential equations with discontinuous righthand sides. Dordrecht: Kluwer Academic; 1988.
 Llibre J, da Silva PR, Teixeira MA. Regularization of discontinuous vector fields on \(\mathbb {R}^{3}\) via singular perturbation. J Dyn Differ Equ. 2007;19:309–31.
 Sotomayor J, Teixeira MA. Regularization of discontinuous vector fields. In: Equadiff 95 proceedings of the international conference on differential equations. Singapore: World Scientific; 1996. p. 207–23.
 Teixeira MA, da Silva PR. Regularization and singular perturbation techniques for non-smooth systems. Physica D. 2012;241:1948–55.
 Potthast R, Beim Graben P. Existence and properties of solutions for neural field equations. Math Methods Appl Sci. 2010;33:935–49.
 Coddington EA, Levinson N. Theory of ordinary differential equations. New York: McGraw-Hill; 1955.
 Wikipedia. Carathéodory’s existence theorem. https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_existence_theorem (2017).