Ill-Posed Point Neuron Models
- Bjørn Fredrik Nielsen^{1} and
- John Wyller^{1}
https://doi.org/10.1186/s13408-016-0039-8
© Nielsen and Wyller 2016
Received: 17 November 2015
Accepted: 20 April 2016
Published: 30 April 2016
Abstract
We show that point-neuron models with a Heaviside firing rate function can be ill posed. More specifically, the initial-condition-to-solution map might become discontinuous in finite time. Consequently, if finite precision arithmetic is used, then it is virtually impossible to guarantee the accurate numerical solution of such models. If a smooth firing rate function is employed, then standard ODE theory implies that point-neuron models are well posed. Nevertheless, in the steep firing rate regime, the problem may become close to ill posed, and the error amplification, in finite time, can be very large. This observation is illuminated by numerical experiments. We conclude that, if a steep firing rate function is employed, then minor round-off errors can have a devastating effect on simulations, unless proper error-control schemes are used.
1 Introduction
A simple example, presented in Sect. 4, shows that \(R_{\infty}\) can become discontinuous. Hence, the model is mathematically ill posed [4, 5], and round-off errors of any size can corrupt computations. We conclude that it is very difficult to produce reliable simulations with such models. Since all norms on finite-dimensional spaces are equivalent, it is not possible to “circumvent” this problem by changing the involved topologies.
Our investigation is motivated by the fact that steep sigmoid functions, or even the Heaviside function, are often employed in mathematical/computational neuroscience; see e.g. [1, 6] and references therein. Other authors [7, 8] have also pointed out that severe challenges occur if \(\beta= \infty\), i.e. issues concerning how to define suitable function spaces and how to prove existence of solutions. Nevertheless, as far as we know, results which explicitly discuss the ill-posed nature of (1)–(2) when \(\beta=\infty\), and how this property yields extra numerical challenges in the steep, but smooth, firing rate regime, have not previously been published.
Remark
We would like to point out the following: Assume that an initial condition is close to an unstable equilibrium. Our results should not be interpreted as expressing the mundane fact that a perturbation of this initial condition, moving it to another region with completely different dynamical properties, may lead to large changes in the solution. In fact, we show that the error-amplification ratio can be huge, during small time intervals, even though the perturbation does not change which neurons are active. That is, the change in the initial condition does not alter the qualitative behavior of the dynamical system for \(0< t \ll 1\)—only the quantitative properties are dramatically altered. This can happen in the steep firing rate regime.
2 Numerical Results
Let us first compute the error-amplification ratio (5) for some simple problems.
Example 1
Error-amplification ratio, with \(\pmb{T=0.1}\) , associated with Example 1
| β | 1 | 25 | 50 | 75 |
|---|---|---|---|---|
| \(E(T;\beta)=\frac{\lvert u(T;\beta) - \tilde{u}(T;\beta) \rvert}{\lvert u_{0} - \tilde{u}_{0} \rvert}\) | 0.95 | 2.79 | 8.58 | 26.41 |
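The concrete model and parameters of Example 1 are not reproduced in this excerpt, so the sketch below cannot reproduce the table's numbers. Assuming an illustrative scalar model \(u'(t) = -u(t) + S_{\beta}[u(t)-u_{\theta}] + q\) with the sigmoid \(S_{\beta}[x]=(1+e^{-\beta x})^{-1}\), it shows how the ratio \(E(T;\beta)\) is formed from two nearby initial conditions, and how it grows with β; all parameter values are assumptions made for illustration.

```python
import numpy as np

def S(x, beta):
    """Sigmoid firing rate function S_beta[x] = 1 / (1 + exp(-beta * x))."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def solve(u0, beta, T, u_theta=0.5, q=0.0, n=20000):
    """Forward-Euler integration of the assumed scalar model
    u' = -u + S_beta[u - u_theta] + q on [0, T]."""
    dt = T / n
    u = u0
    for _ in range(n):
        u = u + dt * (-u + S(u - u_theta, beta) + q)
    return u

def amplification(u0, du, beta, T=0.1):
    """Error-amplification ratio |u(T) - u~(T)| / |u0 - u~0|, cf. (5)."""
    return abs(solve(u0, beta, T) - solve(u0 + du, beta, T)) / abs(du)

# Initial state at the threshold; perturbation of size 1e-8.
for beta in (1, 25, 50, 75):
    print(beta, amplification(u0=0.5, du=1e-8, beta=beta))
```

With the initial state at the threshold, the linearized growth rate is roughly \(\beta/4 - 1\), so the printed ratios increase rapidly with β, mirroring the qualitative trend in the table.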
Example 2
3 Analysis
The purpose of this section is to present an analysis of the error-amplification ratio (5) and thereby explain the main features of our numerical results. Even though the Picard–Lindelöf theorem [9, 10] asserts that (1)–(2) has a unique solution \(\mathbf{u}(t)\), provided that \(\mathbf{q}(t)\) is continuous and that \(\beta< \infty\), it is virtually impossible to determine a simple expression for \(\mathbf {u}(t)\). On the other hand, if \(\mathbf{u}(t) \approx\mathbf {u}_{\theta }\) and \(\beta< \infty\), then we can linearize \(S_{\beta}\) to get an approximate model, which is much easier to work with.
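The linearization step can be made explicit. The paper's precise firing rate function is not reproduced in this excerpt; assuming the common sigmoid choice \(S_{\beta}[x] = (1+e^{-\beta x})^{-1}\), a first-order Taylor expansion about the threshold gives
\[
S_{\beta}[x] \approx S_{\beta}[0] + S_{\beta}'[0]\, x = \frac{1}{2} + \frac{\beta}{4}\, x,
\]
since \(S_{\beta}'[x] = \beta S_{\beta}[x](1-S_{\beta}[x])\) and \(S_{\beta}[0]=\frac{1}{2}\). The slope of the linearized nonlinearity thus grows linearly in β, which is the mechanism behind the large error amplification in the steep regime.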
3.1 Linearization
Remark
3.2 Preparations
The main point of this discussion is to show that there exist (smooth) source terms q and perturbations of the initial condition such that (14) holds, regardless of how large \(\hat{T},\beta_{\max},\alpha> 0\) are. Also, the solutions of the linearized model will satisfy (16). For the sake of simple notation, let \(T=\min\{ \tilde{T},\hat{T} \}\) and \(\beta_{\min} = \max\{ \tilde{\beta}_{\min}, \hat{\beta}_{\min } \}\).
3.3 Linearization Error
3.4 Error-Amplification Ratio
We conclude that, during time intervals in which (14) holds, the linearized equations (9)–(10) yield a fair approximation of the point-neuron model (1)–(2). Hence, the analysis presented in this section, which yields an error-amplification ratio of order \(O(e^{\beta})\) for (9)–(10), explains our numerical results. More precisely, even though the error is bounded by \(2 \beta_{\max}^{-1-\alpha}\) during such time intervals, see (15), the error-amplification ratio can be approximately of order \(O(e^{\beta})\). This implies that minor perturbations, e.g. round-off errors, can corrupt computations. For example, in Fig. 4 an initial perturbation of size 10^{−5} is amplified to an error of approximately \(0.04=4~\%\).
Remark
Assume that the \(\| \cdot\|_{\infty}\)-norm of the source term \(\mathbf {q}(t)\) is bounded. Then, since \(0< S_{\beta}[x] < 1\) for all \(x \in \mathbb {R}\), it follows from (1) that both \(\| \mathbf{u}'(t) \|_{\infty}\) and \(\| \tilde{\mathbf{u}}'(t) \| _{\infty}\) are bounded independently of the size of the steepness parameter β, at least when \(\mathbf{u}(t)\approx u_{\theta}\) and \(\tilde{\mathbf{u}}(t) \approx u_{\theta}\). Consequently, the difference \(\| \mathbf{u}(T) - \tilde{\mathbf{u}}(T) \|_{\infty}\) is also bounded independently of \(\beta> 0\). Our results might therefore appear somewhat counter-intuitive. Note, however, that we have only argued that the error-amplification ratio (5) may, approximately, be of order \(O(e^{\beta})\). If β is large, this can cause severe numerical challenges.
The maximum error bound (15), valid when \(\mathbf {u}-\mathbf{u}_{\theta}\) and \(\tilde{\mathbf{u}}-\mathbf {u}_{\theta}\) satisfy (14), suggests that setting \(\beta= \infty\) might provide a solution to the issues discussed above. Unfortunately, as will be explained in the next section, this is not the case.
4 Ill Posed
We will now show that (1)–(2) can become truly ill posed if a Heaviside firing rate function is employed. More specifically, the initial-condition-to-solution map, in finite time, can be discontinuous.
One may consider this issue from a more pragmatic point of view. Let \(v_{\Delta t}\) denote a numerical approximation of v. If a Heaviside firing rate function is employed, then \(H(v_{\Delta t}-u_{\theta})\) must be evaluated in some line of the simulation software. This is an unstable procedure because H has a jump discontinuity at 0, and round-off errors of any size can corrupt computations.
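The instability of evaluating \(H(v_{\Delta t}-u_{\theta})\) can be seen directly in code. The following sketch uses the convention \(H(x)=1\) for \(x \geq 0\) and an illustrative threshold value; the perturbation is a single unit of double-precision round-off.

```python
import numpy as np

def H(x):
    """Heaviside step function with the convention H(x) = 1 for x >= 0."""
    return 1.0 if x >= 0.0 else 0.0

u_theta = 0.5                # illustrative threshold value for firing
v = 0.5                      # numerical approximation sitting at the threshold
eps = np.finfo(float).eps    # ~2.2e-16, one unit of round-off for float64

print(H(v - u_theta))        # 1.0
print(H(v - eps - u_theta))  # 0.0: one round-off error flips the firing term
```

A perturbation far below any reasonable discretization error thus switches the neuron between "active" and "inactive", which is exactly the jump-discontinuity problem described above.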
5 Conclusions and Discussion
Since \(R_{\infty}\) can become discontinuous, it is virtually impossible to guarantee the accurate numerical solution of point-neuron models which employ a Heaviside firing rate function: any round-off error can potentially corrupt simulations. Alternatively, one may stop the simulation as soon as the solution hits the jump discontinuity, i.e. the threshold value for firing.
We have also observed that models with a steep, but smooth, firing rate function can amplify errors to an extreme degree, which is typical for “almost ill-posed” problems. Consequently, reliable simulations can only be obtained if proper error-control schemes are invoked. How to design effective error-control methods, for models with a large steepness parameter β, is, as far as the authors know, still an open problem. Nevertheless, it seems plausible that suitable adaptive numerical schemes, where the time steps become smaller when the solution reaches regions in the vicinity of the threshold value for firing, might be capable of handling the numerical error amplification.
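The adaptive idea suggested above can be sketched as follows. This is not the authors' method but a minimal illustration under assumed parameters: the step size is reduced whenever the state enters a band around the threshold \(u_{\theta}\), where the sigmoid is steep and errors are amplified the most.

```python
import numpy as np

def S(x, beta):
    """Sigmoid firing rate function (assumed form)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def adaptive_euler(u0, T, beta=75.0, u_theta=0.5, q=0.0,
                   dt_coarse=1e-3, dt_fine=1e-6, band=0.05):
    """Forward Euler for the assumed scalar model u' = -u + S_beta[u - u_theta] + q,
    with a threshold-aware step size: fine steps inside |u - u_theta| < band,
    coarse steps elsewhere. All parameter values are illustrative."""
    t, u = 0.0, u0
    while T - t > 1e-12:
        dt = dt_fine if abs(u - u_theta) < band else dt_coarse
        dt = min(dt, T - t)          # do not step past the final time
        u = u + dt * (-u + S(u - u_theta, beta) + q)
        t += dt
    return u
```

For example, `adaptive_euler(0.49, T=0.1)` integrates a trajectory starting near the threshold using the fine step; a production scheme would instead control the local error estimate, but the step-size logic near \(u_{\theta}\) would play the same role.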
From a modeling perspective one might wonder: Should a voltage-based model of cortex be ill posed or “almost ill posed”? If so, then models employing a Heaviside firing rate function cannot be robustly solved with finite precision arithmetic and regularized approximations are numerically challenging [4, 5].
An easy solution to the issues raised in this paper is to avoid steep firing rate functions. If β is fairly small, then standard ODE theory [9, 10] and textbook material about its numerical treatment can be used, provided that the source term \(\mathbf {q}(t)\) is continuous. Nevertheless, steep sigmoid functions are popular in computational neuroscience.
Declarations
Acknowledgements
This work was supported by The Research Council of Norway, project number 239070. The authors would like to thank the reviewers for a number of interesting comments, which significantly improved this paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
- Bressloff P. Spatiotemporal dynamics of continuum neural fields. J Phys A, Math Theor. 2012;45:033001.
- Ermentrout B. Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys. 1998;61:353–430.
- Faugeras O, Veltz R, Grimbert F. Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks. Neural Comput. 2009;21:147–87.
- Engl HW, Hanke M, Neubauer A. Regularization of inverse problems. Dordrecht: Kluwer Academic; 1996.
- Well-posed problem. Wikipedia. https://en.wikipedia.org/wiki/Well-posed_problem (2016).
- Coombes S. Waves, bumps, and patterns in neural field theories. Biol Cybern. 2005;93:91–108.
- Veltz R, Faugeras O. Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst. 2010;9:954–98.
- Potthast R, beim Graben P. Existence and properties of solutions for neural field equations. Math Methods Appl Sci. 2010;33:935–49.
- Hirsch MW, Smale S. Differential equations, dynamical systems and linear algebra. New York: Academic Press; 1974.
- Picard–Lindelöf theorem. Wikipedia. https://en.wikipedia.org/wiki/Picard (2016).