
Analysis of a hyperbolic geometric model for visual texture perception

Abstract

We study the neural field equations introduced by Chossat and Faugeras to model the representation and the processing of image edges and textures in the hypercolumns of the cortical area V1. The key entity, the structure tensor, intrinsically lives in a non-Euclidean, in effect hyperbolic, space. Its spatio-temporal behavior is governed by nonlinear integro-differential equations defined on the Poincaré disc model of the two-dimensional hyperbolic space. Using methods from the theory of functional analysis we show the existence and uniqueness of a solution of these equations. In the case of stationary, that is, time-independent, solutions we perform a stability analysis which yields important results on their behavior. We also present an original study, based on non-Euclidean, hyperbolic, analysis, of a spatially localised bump solution in a limiting case. We illustrate our theoretical results with numerical simulations.

Mathematics Subject Classification: 30F45, 33C05, 34A12, 34D20, 34D23, 34G20, 37M05, 43A85, 44A35, 45G10, 51M10, 92B20, 92C20.

1 Introduction

The selectivity of the responses of individual neurons to external features is often the basis of neuronal representations of the external world. For example, neurons in the primary visual cortex (V1) respond preferentially to visual stimuli that have a specific orientation [1–3], spatial frequency [4], velocity and direction of motion [5], or color [6]. A local network in the primary visual cortex, roughly 1 mm² of cortical surface, is assumed to consist of subgroups of inhibitory and excitatory neurons, each of which is tuned to a particular feature of an external stimulus. These subgroups are the so-called Hubel and Wiesel hypercolumns of V1. We introduced in [7] a new approach to modeling the processing of image edges and textures in the hypercolumns of area V1, based on a nonlinear representation of the image first-order derivatives called the structure tensor [8, 9]. We suggested that this structure tensor is represented by neuronal populations in the hypercolumns of V1, and that the time evolution of this representation is governed by equations similar to those proposed by Wilson and Cowan [10]. The question of whether some populations of neurons in V1 can represent the structure tensor is discussed in [7] but cannot be answered in a definite manner. Nevertheless, we hope that the predictions of the theory we are developing will help to decide this issue.

Our present investigations were motivated by the work of Bressloff, Cowan, Golubitsky, Thomas and Wiener [11, 12] on the spontaneous occurrence of hallucinatory patterns under the influence of psychotropic drugs, and its extension to the structure tensor model. Further motivation came from the studies of Bressloff and Cowan [4, 13, 14], in which they develop a spatial extension of the ring model of orientation of Ben-Yishai et al. [1] and Hansel and Sompolinsky [2]. To achieve this goal, we first have to better understand the local model, that is, the model of a ‘texture’ hypercolumn isolated from its neighbours.

The aim of this paper is to present a rigorous mathematical framework for modeling the representation of the structure tensor by neuronal populations in V1. We would also like to point out that the mathematical analysis we develop here is general and could be applied to other integro-differential equations defined on the set of structure tensors, so that even if the structure tensor turned out not to be represented in a hypercolumn of V1, our framework would still be relevant. We then concentrate on the occurrence of localized states, also called bumps. This is in contrast to the work of [7] and [15], where ‘spatially’ periodic solutions were considered. The structure of this paper is as follows. In Section 2 we introduce the structure tensor model and the corresponding equations, and link our model to the ring model of orientations. In Section 3 we use classical tools for evolution equations in functional spaces to analyse the existence and uniqueness of the solutions of our equations. In Section 4 we study stationary solutions, which are very important for the dynamics of the equation, by analysing a nonlinear convolution operator and making use of the Haar measure of our feature space. In Section 5 we push further the study of stationary solutions in a special case and present a technical analysis, involving hypergeometric functions, of what we call a hyperbolic radially symmetric stationary pulse in the high gain limit. Finally, in Section 6, we present some numerical simulations of the solutions to verify our theoretical findings.

2 The model

By definition, the structure tensor is based on the spatial derivatives of an image in a small area that can be thought of as part of a receptive field. These spatial derivatives are then summed nonlinearly over the receptive field. Let $I(x,y)$ denote the original image intensity function, where $x$ and $y$ are two spatial coordinates. Let $I_{\sigma_1}$ denote the scale-space representation of $I$ obtained by convolution with the Gaussian kernel $g_{\sigma}(x,y)=\frac{1}{2\pi\sigma^2}e^{-(x^2+y^2)/(2\sigma^2)}$:

$$I_{\sigma_1}=I\ast g_{\sigma_1}.$$

The gradient $\nabla I_{\sigma_1}$ is a two-dimensional vector with coordinates $(I_x^{\sigma_1},I_y^{\sigma_1})$ which emphasizes image edges. One then forms the $2\times 2$ symmetric matrix of rank one $T_0=\nabla I_{\sigma_1}(\nabla I_{\sigma_1})^{\mathsf T}$, where ${}^{\mathsf T}$ indicates the transpose of a vector. The set of $2\times 2$ symmetric positive semidefinite matrices of rank one will be noted $S^+(1,2)$ throughout the paper (see [16] for a complete study of the set $S^+(p,n)$ of $n\times n$ symmetric positive semidefinite matrices of fixed rank $p<n$). By convolving $T_0$ componentwise with a Gaussian $g_{\sigma_2}$ we finally form the structure tensor as the symmetric matrix:

$$T=T_0\ast g_{\sigma_2}=\begin{pmatrix}\langle (I_x^{\sigma_1})^2\rangle_{\sigma_2} & \langle I_x^{\sigma_1}I_y^{\sigma_1}\rangle_{\sigma_2}\\ \langle I_x^{\sigma_1}I_y^{\sigma_1}\rangle_{\sigma_2} & \langle (I_y^{\sigma_1})^2\rangle_{\sigma_2}\end{pmatrix},$$

where we have set, for example:

$$\langle (I_x^{\sigma_1})^2\rangle_{\sigma_2}=(I_x^{\sigma_1})^2\ast g_{\sigma_2}.$$

Since the computation of derivatives usually involves a stage of scale-space smoothing, the definition of the structure tensor requires two scale parameters. The first, $\sigma_1$, is a local scale for smoothing prior to the computation of image derivatives; the structure tensor is insensitive to noise and details at scales smaller than $\sigma_1$. The second, $\sigma_2$, is an integration scale for accumulating the nonlinear operations on the derivatives into an integrated image descriptor; it is related to the characteristic size of the texture to be represented, and to the size of the receptive fields of the neurons that may represent the structure tensor.
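As an illustration, the two-scale construction just described can be implemented in a few lines. The following sketch uses NumPy/SciPy; the function name and the default parameter values are ours, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(image, sigma1=1.0, sigma2=3.0):
    """Compute the structure tensor field of a 2D grayscale image.

    sigma1: local scale (smoothing before differentiation).
    sigma2: integration scale (averaging of the rank-one tensors T_0).
    Returns the three distinct entries (Txx, Txy, Tyy) of T at each pixel.
    """
    smoothed = gaussian_filter(image, sigma1)   # I_{sigma1} = I * g_{sigma1}
    Iy, Ix = np.gradient(smoothed)              # spatial derivatives of I_{sigma1}
    Txx = gaussian_filter(Ix * Ix, sigma2)      # <(I_x)^2>_{sigma2}
    Txy = gaussian_filter(Ix * Iy, sigma2)      # <I_x I_y>_{sigma2}
    Tyy = gaussian_filter(Iy * Iy, sigma2)      # <(I_y)^2>_{sigma2}
    return Txx, Txy, Tyy
```

Since the averaged tensor is a nonnegative combination of positive semidefinite rank-one matrices, its determinant $T_{xx}T_{yy}-T_{xy}^2$ is nonnegative at every pixel, in agreement with the Cauchy–Schwarz argument.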

By construction, $T$ is symmetric and nonnegative, since $\det(T)\geq 0$ by the Cauchy–Schwarz inequality. It therefore has two orthonormal eigenvectors $e_1$, $e_2$ and two nonnegative corresponding eigenvalues $\lambda_1$ and $\lambda_2$, which we can always assume to satisfy $\lambda_1\geq\lambda_2\geq 0$. Furthermore, the spatial averaging distributes the information of the image over a neighborhood, and therefore the two eigenvalues are always positive. Thus, the set of the structure tensors lives in the set of $2\times 2$ symmetric positive definite matrices, noted $\mathrm{SPD}(2,\mathbb{R})$ throughout the paper. The distribution of these eigenvalues in the $(\lambda_1,\lambda_2)$ plane reflects the local organization of the image intensity variations. Indeed, each structure tensor can be written as the linear combination:

$$T=\lambda_1 e_1 e_1^{\mathsf T}+\lambda_2 e_2 e_2^{\mathsf T}=(\lambda_1-\lambda_2)e_1 e_1^{\mathsf T}+\lambda_2\big(e_1 e_1^{\mathsf T}+e_2 e_2^{\mathsf T}\big)=(\lambda_1-\lambda_2)e_1 e_1^{\mathsf T}+\lambda_2 I_2,$$
(1)

where $I_2$ is the identity matrix and $e_1e_1^{\mathsf T}\in S^+(1,2)$. Some easy interpretations can be made for simple examples: constant areas are characterized by $\lambda_1=\lambda_2\approx 0$, straight edges are such that $\lambda_1\gg\lambda_2\approx 0$, their orientation being that of $e_2$, while corners yield $\lambda_1\geq\lambda_2\gg 0$. The coherency $c$ of the local image is measured by the ratio $c=\frac{\lambda_1-\lambda_2}{\lambda_1+\lambda_2}$; a large coherency reveals anisotropy in the texture.
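For numerical experiments, the eigenvalues and the coherency can be obtained from the closed-form eigendecomposition of a symmetric $2\times 2$ matrix; this small helper (ours, not from the paper) makes the regimes above concrete.

```python
import numpy as np

def tensor_features(Txx, Txy, Tyy):
    """Eigenvalues lam1 >= lam2 >= 0 of the 2x2 symmetric tensor and the
    coherency c = (lam1 - lam2) / (lam1 + lam2)."""
    trace = Txx + Tyy                                  # lam1 + lam2
    disc = np.sqrt((Txx - Tyy) ** 2 + 4.0 * Txy ** 2)  # lam1 - lam2
    lam1 = 0.5 * (trace + disc)
    lam2 = 0.5 * (trace - disc)
    coherency = disc / np.maximum(trace, 1e-12)        # guard against 0/0
    return lam1, lam2, coherency
```

For instance, a tensor with entries $(T_{xx},T_{xy},T_{yy})=(3,0,1)$ has eigenvalues 3 and 1 and coherency 0.5, while a rank-one tensor has coherency 1.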

We assume that a hypercolumn of V1 can represent the structure tensor in the receptive field of its neurons as the average membrane potential values of some of its neuronal populations. Let $T$ be a structure tensor. The time evolution of the average potential $V(T,t)$ for a given column is governed by the following neural mass equation, adapted from [7], where we allow the connectivity function $W$ to depend upon the time variable $t$ and we integrate over the set of $2\times 2$ symmetric positive definite matrices:

$$\begin{cases}\partial_t V(T,t)=-\alpha V(T,t)+\displaystyle\int_{\mathrm{SPD}(2)}W(T,T',t)S\big(V(T',t)\big)\,dT'+I_{\mathrm{ext}}(T,t),& t>0,\\ V(T,0)=V_0(T).\end{cases}$$
(2)

The nonlinearity $S$ is a sigmoidal function which may be expressed as:

$$S(x)=\frac{1}{1+e^{-\mu x}},$$

where $\mu$ describes the stiffness of the sigmoid and $I_{\mathrm{ext}}$ is an external input.

The set $\mathrm{SPD}(2)$ can be seen as a foliated manifold by way of the set of special symmetric positive definite matrices $\mathrm{SSPD}(2)=\mathrm{SPD}(2)\cap\mathrm{SL}(2,\mathbb{R})$. Indeed, we have $\mathrm{SPD}(2)\overset{\mathrm{hom}}{=}\mathrm{SSPD}(2)\times\mathbb{R}_*^+$. Furthermore, $\mathrm{SSPD}(2)\overset{\mathrm{isom}}{=}\mathbb{D}$, where $\mathbb{D}$ is the Poincaré disk, see, for example, [7]. As a consequence we use the following foliation of $\mathrm{SPD}(2)$: $\mathrm{SPD}(2)\overset{\mathrm{hom}}{=}\mathbb{D}\times\mathbb{R}_*^+$, which allows us to write every $T\in\mathrm{SPD}(2)$ as a pair $T=(z,\Delta)$ with $(z,\Delta)\in\mathbb{D}\times\mathbb{R}_*^+$. Here $T$, $z$ and $\Delta$ are related by $\det(T)=\Delta^2$ and by the fact that $z$ is the representation in $\mathbb{D}$ of $\tilde T\in\mathrm{SSPD}(2)$ with $T=\Delta\tilde T$.

It is well-known [17] that $\mathbb{D}$ (and hence $\mathrm{SSPD}(2)$) is a two-dimensional Riemannian space of constant sectional curvature equal to −1 for the distance noted $d_2$ defined by

$$d_2(z,z')=\operatorname{arctanh}\frac{|z-z'|}{|1-\bar{z}z'|}.$$

The isometries of $\mathbb{D}$, that is, the transformations that preserve the distance $d_2$, are the elements of the unitary group $\mathrm{U}(1,1)$. In Appendix A we describe the basic structure of this group. It follows, see for example [7, 18], that $\mathrm{SPD}(2)$ is a three-dimensional Riemannian space of constant sectional curvature equal to −1 for the distance noted $d_0$ defined by

$$d_0(T,T')=\sqrt{2\big(\log\Delta-\log\Delta'\big)^2+d_2^2(z,z')}.$$
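The two distances $d_2$ and $d_0$ translate directly into code; the following helper functions (our own, for numerical experiments on $\mathbb{D}\times\mathbb{R}_*^+$) evaluate them.

```python
import numpy as np

def d2(z, zp):
    """Poincare-disk distance d_2(z, z') = arctanh(|z - z'| / |1 - conj(z) z'|)."""
    return np.arctanh(abs(z - zp) / abs(1.0 - np.conjugate(z) * zp))

def d0(z, delta, zp, deltap):
    """Distance on SPD(2) in (z, Delta) coordinates:
    sqrt(2 (log Delta - log Delta')^2 + d_2(z, z')^2)."""
    return np.sqrt(2.0 * (np.log(delta) - np.log(deltap)) ** 2 + d2(z, zp) ** 2)
```

Note that $d_2$ is symmetric because $|1-\bar z z'|=|1-\bar z' z|$, and that $d_0$ vanishes only when both the shape part $z$ and the determinant part $\Delta$ agree.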

As shown in Proposition B.0.1 of Appendix B it is possible to express the volume element $dT$ in $(z_1,z_2,\Delta)$ coordinates, with $z=z_1+iz_2$:

$$dT=8\sqrt{2}\,\frac{d\Delta}{\Delta}\,\frac{dz_1\,dz_2}{(1-|z|^2)^2}.$$

We note $dm(z)=\frac{dz_1\,dz_2}{(1-|z|^2)^2}$ and equation (2) can be written in $(z,\Delta)$ coordinates:

$$\partial_t V(z,\Delta,t)=-\alpha V(z,\Delta,t)+8\sqrt{2}\int_0^{+\infty}\!\!\int_{\mathbb{D}}W(z,\Delta,z',\Delta',t)S\big(V(z',\Delta',t)\big)\,\frac{d\Delta'}{\Delta'}\,dm(z')+I_{\mathrm{ext}}(z,\Delta,t).$$

We get rid of the constant $8\sqrt{2}$ by redefining $W$ as $8\sqrt{2}W$:

$$\begin{cases}\partial_t V(z,\Delta,t)=-\alpha V(z,\Delta,t)+\displaystyle\int_0^{+\infty}\!\!\int_{\mathbb{D}}W(z,\Delta,z',\Delta',t)S\big(V(z',\Delta',t)\big)\,\frac{d\Delta'}{\Delta'}\,dm(z')+I_{\mathrm{ext}}(z,\Delta,t),& t>0,\\ V(z,\Delta,0)=V_0(z,\Delta).\end{cases}$$
(3)

In [7], we have assumed that the representation of the local image orientations and textures is richer than, and contains, the local image orientations model, which is conceptually equivalent to the direction of the local image intensity gradient. The richness of the structure tensor model has been expounded in [7]. The embedding of the ring model of orientation in the structure tensor model can be explained by the intrinsic relation that exists between the two sets of matrices $\mathrm{SPD}(2,\mathbb{R})$ and $S^+(1,2)$. First of all, when $\sigma_2$ goes to zero, that is, when the characteristic size of the structure becomes very small, we have $T_0\ast g_{\sigma_2}\to T_0$, which means that the tensor $T\in\mathrm{SPD}(2,\mathbb{R})$ degenerates to a tensor $T_0\in S^+(1,2)$; this can be interpreted as the loss of one dimension. We can write each $T_0\in S^+(1,2)$ as $T_0=xx^{\mathsf T}=r^2uu^{\mathsf T}$, where $u=(\cos\theta,\sin\theta)^{\mathsf T}$ and $(r,\theta)$ is the polar representation of $x$. Since $x$ and $-x$ correspond to the same $T_0$, $\theta$ is identified with $\theta+k\pi$, $k\in\mathbb{Z}$. Thus $S^+(1,2)=\mathbb{R}_*^+\times\mathbb{P}^1$, where $\mathbb{P}^1$ is the real projective space of dimension 1 (the lines of $\mathbb{R}^2$). The integration scale $\sigma_2$, at which the averages of the estimates of the image derivatives are computed, is therefore the link between the classical representation of the local image orientations by the gradient and the representation of the local image textures by the structure tensor. It is also possible to highlight this relation by coming back to the interpretation of straight edges of the previous paragraph. When $\lambda_1\gg\lambda_2\approx 0$ then $T\approx(\lambda_1-\lambda_2)e_1e_1^{\mathsf T}\in S^+(1,2)$ and the orientation is that of $e_2$. We denote by $P$ the projection of a $2\times 2$ symmetric positive definite matrix onto the set $S^+(1,2)$, defined by:

$$P:\begin{cases}\mathrm{SPD}(2,\mathbb{R})\to S^+(1,2),\\ T\mapsto\tau=(\lambda_1-\lambda_2)e_1e_1^{\mathsf T},\end{cases}$$

where $T$ is as in equation (1). We can introduce a metric on the set $S^+(1,2)$ which is derived from a well-chosen Riemannian quotient geometry (see [16]). The resulting Riemannian space has strong geometrical properties: it is geodesically complete and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings and pseudoinversions). Related to the decomposition $S^+(1,2)=\mathbb{R}_*^+\times\mathbb{P}^1$, a metric on the space $S^+(1,2)$ is given by:

$$ds^2=2\Big(\frac{dr}{r}\Big)^2+d\theta^2.$$

The space $S^+(1,2)$ endowed with this metric is a Riemannian manifold (see [16]). Finally, the distance associated to this metric is given by:

$$d_{S^+(1,2)}(\tau_1,\tau_2)=\sqrt{2\log^2\Big(\frac{r_1}{r_2}\Big)+|\theta_1-\theta_2|^2},$$

where $\tau_1=x_1x_1^{\mathsf T}$, $\tau_2=x_2x_2^{\mathsf T}$ and $(r_i,\theta_i)$ denotes the polar coordinates of $x_i$ for $i=1,2$. The volume element in $(r,\theta)$ coordinates is:

$$d\tau=\frac{dr}{r}\,\frac{d\theta}{\pi},$$

where we normalize to 1 the volume element for the θ coordinate.

Let now $\tau=P(T)$ be a symmetric positive semidefinite matrix. The time evolution of the average potential $V(\tau,t)$ of the column is governed by the following neural mass equation, which is the projection of equation (2) onto the subspace $S^+(1,2)$:

$$\partial_t V(\tau,t)=-\alpha V(\tau,t)+\int_{S^+(1,2)}W(\tau,\tau',t)S\big(V(\tau',t)\big)\,d\tau'+I_{\mathrm{ext}}(\tau,t),\quad t>0.$$
(4)

In $(r,\theta)$ coordinates, (4) is rewritten as:

$$\partial_t V(r,\theta,t)=-\alpha V(r,\theta,t)+\int_0^{+\infty}\!\!\int_0^{\pi}W(r,\theta,r',\theta',t)S\big(V(r',\theta',t)\big)\,\frac{d\theta'}{\pi}\,\frac{dr'}{r'}+I_{\mathrm{ext}}(r,\theta,t).$$

This equation is richer than the ring model of orientation, as it contains additional information on the contrast of the image in the direction orthogonal to the preferred orientation. If one wants to recover the ring model of orientation tuning in the visual cortex as it has been presented and studied in [1, 2, 19], it is sufficient (i) to assume that the connectivity function is time-independent and has a convolutional form:

$$W(\tau,\tau',t)=w\big(d_{S^+(1,2)}(\tau,\tau')\big)=w\Big(\sqrt{2\log^2\Big(\frac{r}{r'}\Big)+|\theta-\theta'|^2}\,\Big),$$

and (ii) to look at semi-homogeneous solutions of equation (4), that is, solutions which do not depend upon the variable $r$. We finally obtain:

$$\partial_t V(\theta,t)=-\alpha V(\theta,t)+\int_0^{\pi}w_{\mathrm{sh}}(\theta-\theta')S\big(V(\theta',t)\big)\,\frac{d\theta'}{\pi}+I_{\mathrm{ext}}(\theta,t),$$
(5)

where:

$$w_{\mathrm{sh}}(\theta)=\int_0^{+\infty}w\Big(\sqrt{2\log^2(r)+\theta^2}\,\Big)\,\frac{dr}{r}.$$
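This integral is easily evaluated numerically: the substitution $u=\log r$ turns it into an integral over the whole real line. The sketch below is our own; the difference-of-Gaussians profile for $w$ is only an illustrative choice, and any integrable profile works the same way.

```python
import numpy as np
from scipy.integrate import quad

def w(x, s1=0.5, s2=1.0, A=0.8):
    """Illustrative 'Mexican hat' profile (difference of Gaussians)."""
    g = lambda s: np.exp(-x ** 2 / (2.0 * s ** 2)) / np.sqrt(2.0 * np.pi * s ** 2)
    return g(s1) - A * g(s2)

def w_sh(theta):
    """w_sh(theta) = int_0^inf w(sqrt(2 log^2 r + theta^2)) dr / r,
    computed with the substitution u = log r (so dr / r = du)."""
    val, _ = quad(lambda u: w(np.sqrt(2.0 * u ** 2 + theta ** 2)), -np.inf, np.inf)
    return val
```

Since the integrand depends on $\theta$ only through $\theta^2$, the resulting profile $w_{\mathrm{sh}}$ is even, as expected for a connectivity that depends on a distance.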

It follows from the above discussion that the structure tensor contains, at a given scale, more information than the local image intensity gradient at the same scale and that it is possible to recover the ring model of orientations from the structure tensor model.

The aim of the following sections is to establish that (3) is well-defined and to give sufficient conditions on the different parameters for the existence and uniqueness of a solution of (3).

3 The existence and uniqueness of a solution

In this section we provide theoretical and general results of existence and uniqueness of a solution of (2). In the first subsection (Section 3.1) we study the simpler case of the homogeneous solutions of (2), that is, of the solutions that are independent of the tensor variable T. This simplified model allows us to introduce some notations for the general case and to establish the useful Lemma 3.1.1. We then prove in Section 3.2 the main result of this section, that is the existence and uniqueness of a solution of (2). Finally we develop the useful case of the semi-homogeneous solutions of (2), that is, of solutions that depend on the tensor variable but only through its z coordinate in D.

3.1 Homogeneous solutions

A homogeneous solution of (2) is a solution $V$ that does not depend upon the tensor variable $T$, for a given homogeneous input $I(t)$ and a constant initial condition $V_0$. In $(z,\Delta)$ coordinates, a homogeneous solution of (3) satisfies:

$$\dot V(t)=-\alpha V(t)+\overline{W}(z,\Delta,t)S\big(V(t)\big)+I_{\mathrm{ext}}(t),$$

where:

$$\overline{W}(z,\Delta,t)\overset{\mathrm{def}}{=}\int_0^{+\infty}\!\!\int_{\mathbb{D}}W(z,\Delta,z',\Delta',t)\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}.$$
(6)

Hence necessary conditions for the existence of a homogeneous solution are that:

  • the double integral (6) is convergent,

  • $\overline{W}(z,\Delta,t)$ does not depend upon the variables $(z,\Delta)$; in that case, we write $\overline{W}(t)$ instead of $\overline{W}(z,\Delta,t)$.

In the special case where $W(z,\Delta,z',\Delta',t)$ is a function only of the distance $d_0$ between $(z,\Delta)$ and $(z',\Delta')$:

$$W(z,\Delta,z',\Delta',t)\equiv w\Big(\sqrt{2(\log\Delta-\log\Delta')^2+d_2^2(z,z')},\,t\Big),$$

the second condition is automatically satisfied. The proof of this fact is given in Lemma D.0.2 of Appendix D. To summarize, the homogeneous solutions satisfy the differential equation:

$$\begin{cases}\dot V(t)=-\alpha V(t)+\overline{W}(t)S\big(V(t)\big)+I_{\mathrm{ext}}(t),& t>0,\\ V(0)=V_0.\end{cases}$$
(7)
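Equation (7) is a scalar ODE and can be integrated with any standard solver. A minimal sketch follows; the function name and parameter choices are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_homogeneous(V0, alpha, Wbar, Iext, T=10.0, mu=1.0):
    """Integrate V'(t) = -alpha V + Wbar(t) S(V) + Iext(t), V(0) = V0.

    Wbar and Iext are callables of time; S is the sigmoid with stiffness mu.
    """
    S = lambda x: 1.0 / (1.0 + np.exp(-mu * x))
    rhs = lambda t, v: -alpha * v + Wbar(t) * S(v[0]) + Iext(t)
    return solve_ivp(rhs, (0.0, T), [V0], dense_output=True, rtol=1e-8, atol=1e-10)
```

With $\overline{W}\equiv 0$ and $I_{\mathrm{ext}}\equiv 0$ the solution is the pure decay $V_0e^{-\alpha t}$, which provides a convenient sanity check for the solver settings.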

3.1.1 A first existence and uniqueness result

Equation (7) defines a Cauchy problem and we have the following theorem.

Theorem 3.1.1 If the external input $I_{\mathrm{ext}}(t)$ and the connectivity function $\overline{W}(t)$ are continuous on some closed interval $J$ containing 0, then for all $V_0\in\mathbb{R}$, there exists a unique solution of (7) defined on a subinterval $J_0$ of $J$ containing 0 such that $V(0)=V_0$.

Proof It is a direct application of Cauchy’s theorem on differential equations. We consider the mapping $f:J\times\mathbb{R}\to\mathbb{R}$ defined by:

$$f(t,x)=-\alpha x+\overline{W}(t)S(x)+I_{\mathrm{ext}}(t).$$

It is clear that $f$ is continuous from $J\times\mathbb{R}$ to $\mathbb{R}$. We have for all $x,y\in\mathbb{R}$ and $t\in J$:

$$\big|f(t,x)-f(t,y)\big|\leq\alpha|x-y|+\big|\overline{W}(t)\big|\,S'_m\,|x-y|,$$

where $S'_m=\sup_{x\in\mathbb{R}}|S'(x)|$.

Since $\overline{W}$ is continuous on the compact interval $J$, it is bounded there by some $C>0$ and:

$$\big|f(t,x)-f(t,y)\big|\leq\big(\alpha+CS'_m\big)|x-y|.$$

 □

We can extend this result to all positive times if $I_{\mathrm{ext}}$ and $\overline{W}$ are continuous on $\mathbb{R}^+$.

Proposition 3.1.1 If $I_{\mathrm{ext}}$ and $\overline{W}$ are continuous on $\mathbb{R}^+$, then for all $V_0\in\mathbb{R}$, there exists a unique solution of (7) defined on $\mathbb{R}^+$ such that $V(0)=V_0$.

Proof We have already shown the inequality:

$$\big|f(t,x)-f(t,y)\big|\leq\alpha|x-y|+\big|\overline{W}(t)\big|\,S'_m\,|x-y|,$$

so $f$ is locally Lipschitz with respect to its second argument. Let $V$ be a maximal solution on $J_0$ and denote by $\beta$ the upper bound of $J_0$; we suppose that $\beta<+\infty$. Then we have for all $t\in[0,\beta[$:

$$V(t)=e^{-\alpha t}V_0+\int_0^t e^{-\alpha(t-u)}\overline{W}(u)S\big(V(u)\big)\,du+\int_0^t e^{-\alpha(t-u)}I_{\mathrm{ext}}(u)\,du,$$

hence

$$\big|V(t)\big|\leq|V_0|+S_m\int_0^{\beta}e^{\alpha u}\big|\overline{W}(u)\big|\,du+\int_0^{\beta}e^{\alpha u}\big|I_{\mathrm{ext}}(u)\big|\,du,\quad\forall t\in[0,\beta[,$$

where $S_m=\sup_{x\in\mathbb{R}}|S(x)|$.

This implies that the maximal solution $V$ is bounded on $[0,\beta[$, but Theorem C.0.2 of Appendix C ensures that this is impossible. It follows that necessarily $\beta=+\infty$. □

3.1.2 Simplification of (6) in a special case

Invariance In the previous section, we have stated that in the special case where $W$ is a function of the distance between two points in $\mathbb{D}\times\mathbb{R}_*^+$, then $\overline{W}(z,\Delta,t)$ does not depend upon the variables $(z,\Delta)$. More precisely, the following result holds (see the proof of Lemma D.0.2 of Appendix D).

Lemma 3.1.1 Suppose that $W$ is a function of $d_0(T,T')$ only. Then $\overline{W}$ does not depend upon the variable $T$.

Mexican hat connectivity In this paragraph, we push further the computation of $\overline{W}$ in the special case where $W$ does not depend upon the time variable $t$ and takes the special form suggested by Amari in [20], commonly referred to as the ‘Mexican hat’ connectivity. It features center excitation and surround inhibition, which is an effective model for a mixed population of interacting inhibitory and excitatory neurons with typical cortical connections. It is also only a function of $d_0(T,T')$.

In detail, we have:

$$W(z,\Delta,z',\Delta')=w\Big(\sqrt{2(\log\Delta-\log\Delta')^2+d_2^2(z,z')}\,\Big),$$

where:

$$w(x)=\frac{1}{\sqrt{2\pi\sigma_1^2}}e^{-\frac{x^2}{2\sigma_1^2}}-\frac{A}{\sqrt{2\pi\sigma_2^2}}e^{-\frac{x^2}{2\sigma_2^2}}$$

with $0\leq\sigma_1\leq\sigma_2$ and $0\leq A\leq 1$.

In this case we can obtain a very simple closed-form formula for W ¯ as shown in the following lemma.

Lemma 3.1.2 When $W$ is the specific Mexican hat function just defined, then:

$$\overline{W}=\frac{\pi^{3/2}}{2}\Big(\sigma_1 e^{2\sigma_1^2}\operatorname{erf}\big(\sqrt{2}\,\sigma_1\big)-A\,\sigma_2 e^{2\sigma_2^2}\operatorname{erf}\big(\sqrt{2}\,\sigma_2\big)\Big),$$
(8)

where erf is the error function defined as:

$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du.$$

Proof The proof is given in Lemma E.0.3 of Appendix E. □

3.2 General solution

We now present the main result of this section about the existence and uniqueness of solutions of equation (2). We first introduce some hypotheses on the connectivity function $W$. We present them in two ways: first on the set of structure tensors considered as the set $\mathrm{SPD}(2)$, then on the set of tensors seen as $\mathbb{D}\times\mathbb{R}_*^+$. Let $J$ be a subinterval of $\mathbb{R}$. We assume that:

  • (H1): $\forall (T,T',t)\in\mathrm{SPD}(2)\times\mathrm{SPD}(2)\times J$, $W(T,T',t)\equiv W\big(d_0(T,T'),t\big)$,

  • (H2): $\mathbf{W}\in\mathcal{C}\big(J,L^1(\mathrm{SPD}(2))\big)$, where $\mathbf{W}$ is defined as $\mathbf{W}(T,t)=W\big(d_0(T,\mathrm{Id}_2),t\big)$ for all $(T,t)\in\mathrm{SPD}(2)\times J$ and $\mathrm{Id}_2$ is the identity matrix of $M_2(\mathbb{R})$,

  • (H3): $\sup_{t\in J}\|\mathbf{W}(t)\|_{L^1}<+\infty$, where $\|\mathbf{W}(t)\|_{L^1}\overset{\mathrm{def}}{=}\int_{\mathrm{SPD}(2)}\big|W\big(d_0(T,\mathrm{Id}_2),t\big)\big|\,dT$.

Equivalently, we can express these hypotheses in (z,Δ) coordinates:

  • (H1 bis): $\forall (z,z',\Delta,\Delta',t)\in\mathbb{D}^2\times(\mathbb{R}_*^+)^2\times J$, $W(z,\Delta,z',\Delta',t)\equiv W\big(d_2(z,z'),|\log(\Delta)-\log(\Delta')|,t\big)$,

  • (H2 bis): $\mathbf{W}\in\mathcal{C}\big(J,L^1(\mathbb{D}\times\mathbb{R}_*^+)\big)$, where $\mathbf{W}$ is defined as $\mathbf{W}(z,\Delta,t)=W\big(d_2(z,0),|\log(\Delta)|,t\big)$ for all $(z,\Delta,t)\in\mathbb{D}\times\mathbb{R}_*^+\times J$,

  • (H3 bis): $\sup_{t\in J}\|\mathbf{W}(t)\|_{L^1}<+\infty$, where

    $\|\mathbf{W}(t)\|_{L^1}\overset{\mathrm{def}}{=}\displaystyle\int_{\mathbb{D}\times\mathbb{R}_*^+}\big|W\big(d_2(z,0),|\log(\Delta)|,t\big)\big|\,\frac{d\Delta}{\Delta}\,dm(z)$.

3.2.1 Functional space setting

We introduce the following mapping $f_g:(t,\phi)\mapsto f_g(t,\phi)$ such that:

$$f_g(t,\phi)(z,\Delta)=\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|,t\Big)S\big(\phi(z',\Delta')\big)\,\frac{d\Delta'}{\Delta'}\,dm(z').$$
(9)

Our aim is to find a functional space $\mathcal{F}$ where (3) is well-defined and the function $f_g$ maps $\mathcal{F}$ to $\mathcal{F}$ for all times $t$. A natural choice would be to choose $\phi$ as an $L^p(\mathbb{D}\times\mathbb{R}_*^+)$-integrable function of the space variable with $1\leq p<+\infty$. Unfortunately, the homogeneous solutions (constant with respect to $(z,\Delta)$) do not belong to that space. Moreover, a valid model of neural networks should only produce bounded membrane potentials. That is why we focus our choice on the functional space $\mathcal{F}=L^\infty(\mathbb{D}\times\mathbb{R}_*^+)$. As $\mathbb{D}\times\mathbb{R}_*^+$ is an open set of $\mathbb{R}^3$, $\mathcal{F}$ is a Banach space for the norm $\|\phi\|_{\mathcal{F}}=\sup_{z\in\mathbb{D}}\sup_{\Delta\in\mathbb{R}_*^+}|\phi(z,\Delta)|$.

Proposition 3.2.1 If $I_{\mathrm{ext}}\in\mathcal{C}(J,\mathcal{F})$ with $\sup_{t\in J}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}<+\infty$ and $W$ satisfies hypotheses (H1 bis)-(H3 bis), then $f_g$ is well-defined and maps $J\times\mathcal{F}$ to $\mathcal{F}$.

Proof $\forall (z,\Delta,t)\in\mathbb{D}\times\mathbb{R}_*^+\times J$, we have:

$$\Big|\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|,t\Big)S\big(\phi(z',\Delta')\big)\,\frac{d\Delta'}{\Delta'}\,dm(z')\Big|\leq S_m\sup_{t\in J}\|\mathbf{W}(t)\|_{L^1}<+\infty.$$

 □

3.2.2 The existence and uniqueness of a solution of (3)

We rewrite (3) as a Cauchy problem:

$$\begin{cases}\partial_t V(z,\Delta,t)=-\alpha V(z,\Delta,t)+\displaystyle\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|,t\Big)S\big(V(z',\Delta',t)\big)\,\frac{d\Delta'}{\Delta'}\,dm(z')+I_{\mathrm{ext}}(z,\Delta,t),& t>0,\\ V(z,\Delta,0)=V_0(z,\Delta).\end{cases}$$
(10)

Theorem 3.2.1 If the external current $I_{\mathrm{ext}}$ belongs to $\mathcal{C}(J,\mathcal{F})$ with $J$ an open interval containing 0 and $W$ satisfies hypotheses (H1 bis)-(H3 bis), then for all $V_0\in\mathcal{F}$, there exists a unique solution of (10) defined on a subinterval $J_0$ of $J$ containing 0 such that $V(z,\Delta,0)=V_0(z,\Delta)$ for all $(z,\Delta)\in\mathbb{D}\times\mathbb{R}_*^+$.

Proof We prove that $f_g$ is continuous on $J\times\mathcal{F}$. We have

$$f_g(t,\phi)-f_g(s,\psi)=\big(f_g(t,\phi)-f_g(t,\psi)\big)+\big(f_g(t,\psi)-f_g(s,\psi)\big),$$

and therefore

$$\big\|f_g(t,\phi)-f_g(s,\psi)\big\|_{\mathcal{F}}\leq S'_m\sup_{t\in J}\|\mathbf{W}(t)\|_{L^1}\,\|\phi-\psi\|_{\mathcal{F}}+S_m\big\|\mathbf{W}(t)-\mathbf{W}(s)\big\|_{L^1}.$$

Because of condition (H2 bis) we can choose $|t-s|$ small enough so that $\|\mathbf{W}(t)-\mathbf{W}(s)\|_{L^1}$ is arbitrarily small. This proves the continuity of $f_g$. Moreover it follows from the previous inequality that:

$$\big\|f_g(t,\phi)-f_g(t,\psi)\big\|_{\mathcal{F}}\leq S'_m W_0^g\,\|\phi-\psi\|_{\mathcal{F}}$$

with $W_0^g=\sup_{t\in J}\|\mathbf{W}(t)\|_{L^1}$. This ensures the Lipschitz continuity of $f_g$ with respect to its second argument, uniformly with respect to the first. The Cauchy–Lipschitz theorem on a Banach space then gives the conclusion. □

Remark 3.2.1 Our result is quite similar to those obtained by Potthast and Graben in [21]. The main differences are that, first, we allow the connectivity function to depend upon the time variable $t$ and, second, our feature space is no longer $\mathbb{R}^n$ but a Riemannian manifold. In their article Potthast and Graben also work with a different functional space, assuming more regularity for the connectivity function $W$ and thereby obtaining more regularity for their solutions.

Proposition 3.2.2 If the external current $I_{\mathrm{ext}}$ belongs to $\mathcal{C}(\mathbb{R}^+,\mathcal{F})$ and $W$ satisfies hypotheses (H1 bis)-(H3 bis) with $J=\mathbb{R}^+$, then for all $V_0\in\mathcal{F}$, there exists a unique solution of (10) defined on $\mathbb{R}^+$ such that $V(z,\Delta,0)=V_0(z,\Delta)$ for all $(z,\Delta)\in\mathbb{D}\times\mathbb{R}_*^+$.

Proof We have just seen in the previous proof that $f_g$ is globally Lipschitz with respect to its second argument:

$$\big\|f_g(t,\phi)-f_g(t,\psi)\big\|_{\mathcal{F}}\leq S'_m W_0^g\,\|\phi-\psi\|_{\mathcal{F}};$$

then Theorem C.0.3 of Appendix C gives the conclusion. □

3.2.3 The intrinsic boundedness of a solution of (3)

In the same way as in the homogeneous case, we show a result on the boundedness of a solution of (3).

Proposition 3.2.3 If the external current $I_{\mathrm{ext}}$ belongs to $\mathcal{C}(\mathbb{R}^+,\mathcal{F})$ and is bounded in time, $\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}<+\infty$, and $W$ satisfies hypotheses (H1 bis)-(H3 bis) with $J=\mathbb{R}^+$, then the solution of (10) is bounded for each initial condition $V_0\in\mathcal{F}$.

Let us set:

$$\rho_g\overset{\mathrm{def}}{=}\frac{2}{\alpha}\Big(S_m W_0^g+\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big),$$

where $W_0^g=\sup_{t\in\mathbb{R}^+}\|\mathbf{W}(t)\|_{L^1}$.

Proof Let $V$ be a solution defined on $\mathbb{R}^+$. Then for all $t\in\mathbb{R}^+$:

$$V(t)=e^{-\alpha t}V_0+\int_0^t e^{-\alpha(t-u)}f_g\big(u,V(u)\big)\,du+\int_0^t e^{-\alpha(t-u)}I_{\mathrm{ext}}(u)\,du,$$

and the following upper bound holds:

$$\|V(t)\|_{\mathcal{F}}\leq e^{-\alpha t}\|V_0\|_{\mathcal{F}}+\frac{1}{\alpha}\Big(S_m W_0^g+\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big)\big(1-e^{-\alpha t}\big).$$
(11)

We can rewrite (11) as:

$$\|V(t)\|_{\mathcal{F}}\leq e^{-\alpha t}\Big(\|V_0\|_{\mathcal{F}}-\frac{1}{\alpha}\Big(S_m W_0^g+\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big)\Big)+\frac{1}{\alpha}\Big(S_m W_0^g+\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big)=e^{-\alpha t}\Big(\|V_0\|_{\mathcal{F}}-\frac{\rho_g}{2}\Big)+\frac{\rho_g}{2}.$$
(12)

If $V_0\in B_{\rho_g}$, the open ball of $\mathcal{F}$ of center 0 and radius $\rho_g$, this implies $\|V(t)\|_{\mathcal{F}}\leq\frac{\rho_g}{2}\big(1+e^{-\alpha t}\big)$ for all $t>0$ and hence $\|V(t)\|_{\mathcal{F}}<\rho_g$ for all $t>0$, proving that $B_{\rho_g}$ is stable. Now assume that $\|V(t)\|_{\mathcal{F}}>\rho_g$ for all $t\geq 0$. The inequality (12) shows that for $t$ large enough this yields a contradiction. Therefore there exists $t_0>0$ such that $\|V(t_0)\|_{\mathcal{F}}=\rho_g$. At this time instant we have

$$\rho_g\leq e^{-\alpha t_0}\Big(\|V_0\|_{\mathcal{F}}-\frac{\rho_g}{2}\Big)+\frac{\rho_g}{2},$$

and hence

$$t_0\leq\frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}}-\rho_g}{\rho_g}\Big).$$

 □

The following corollary is a consequence of the previous proposition.

Corollary 3.2.1 If $V_0\notin B_{\rho_g}$ and $T_g=\inf\{t>0\ \text{such that}\ V(t)\in B_{\rho_g}\}$, then:

$$T_g\leq\frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}}-\rho_g}{\rho_g}\Big).$$

3.3 Semi-homogeneous solutions

A semi-homogeneous solution of (3) is defined as a solution which does not depend upon the variable $\Delta$. In other words, the populations of neurons are not sensitive to the determinant of the structure tensor, that is, to the contrast of the image intensity. The neural mass equation is then equivalent to the neural mass equation for tensors of unit determinant. We point out that semi-homogeneous solutions were previously introduced in [7], where a bifurcation analysis of what are there called H-planforms was performed. In this section, we define the framework in which their equations make sense, without giving proofs of our results as they are a direct consequence of those proven in the general case. We rewrite equation (3) in the case of semi-homogeneous solutions:

$$\begin{cases}\partial_t V(z,t)=-\alpha V(z,t)+\displaystyle\int_{\mathbb{D}}W_{\mathrm{sh}}(z,z',t)S\big(V(z',t)\big)\,dm(z')+I_{\mathrm{ext}}(z,t),& t>0,\\ V(z,0)=V_0(z),\end{cases}$$
(13)

where

$$W_{\mathrm{sh}}(z,z',t)=\int_0^{+\infty}W(z,\Delta,z',\Delta',t)\,\frac{d\Delta'}{\Delta'}.$$

We have implicitly made the assumption that $W_{\mathrm{sh}}$ does not depend on the coordinate $\Delta$. Some conditions under which this assumption is satisfied are described below; they are the direct translations of those of the general case to the context of semi-homogeneous solutions.

Let J be an open interval of R. We assume that:

  • (C1): $\forall (z,z',t)\in\mathbb{D}^2\times J$, $W_{\mathrm{sh}}(z,z',t)\equiv w_{\mathrm{sh}}\big(d_2(z,z'),t\big)$,

  • (C2): $\mathbf{W}_{\mathrm{sh}}\in\mathcal{C}\big(J,L^1(\mathbb{D})\big)$, where $\mathbf{W}_{\mathrm{sh}}$ is defined as $\mathbf{W}_{\mathrm{sh}}(z,t)=w_{\mathrm{sh}}\big(d_2(z,0),t\big)$ for all $(z,t)\in\mathbb{D}\times J$,

  • (C3): $\sup_{t\in J}\|\mathbf{W}_{\mathrm{sh}}(t)\|_{L^1}<+\infty$, where $\|\mathbf{W}_{\mathrm{sh}}(t)\|_{L^1}\overset{\mathrm{def}}{=}\int_{\mathbb{D}}\big|w_{\mathrm{sh}}\big(d_2(z,0),t\big)\big|\,dm(z)$.

Note that conditions (C1)-(C2) and Lemma 3.1.1 imply that for all $z\in\mathbb{D}$, $\int_{\mathbb{D}}|W_{\mathrm{sh}}(z,z',t)|\,dm(z')=\|\mathbf{W}_{\mathrm{sh}}(t)\|_{L^1}$. Then, for all $z\in\mathbb{D}$, the mapping $z'\mapsto W_{\mathrm{sh}}(z,z',t)$ is integrable on $\mathbb{D}$.

From now on, $\mathcal{F}=L^\infty(\mathbb{D})$, and the Fischer–Riesz theorem ensures that $L^\infty(\mathbb{D})$ is a Banach space for the norm $\|\psi\|_\infty=\inf\{C\geq 0,\ |\psi(z)|\leq C\ \text{for almost every}\ z\in\mathbb{D}\}$.

Theorem 3.3.1 If the external current $I_{\mathrm{ext}}$ belongs to $\mathcal{C}(J,\mathcal{F})$ with $J$ an open interval containing 0 and $W_{\mathrm{sh}}$ satisfies conditions (C1)-(C3), then for all $V_0\in\mathcal{F}$, there exists a unique solution of (13) defined on a subinterval $J_0$ of $J$ containing 0.

This solution, defined on the subinterval $J_0$ of $\mathbb{R}$, can in fact be extended to all positive times, and we have the following proposition.

Proposition 3.3.1 If the external current $I_{\mathrm{ext}}$ belongs to $\mathcal{C}(\mathbb{R}^+,\mathcal{F})$ and $W_{\mathrm{sh}}$ satisfies conditions (C1)-(C3) with $J=\mathbb{R}^+$, then for all $V_0\in\mathcal{F}$, there exists a unique solution of (13) defined on $\mathbb{R}^+$.

We can also state a result on the boundedness of a solution of (13):

Proposition 3.3.2 Let $\rho\overset{\mathrm{def}}{=}\frac{2}{\alpha}\big(S_m W_0^{\mathrm{sh}}+\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\big)$, with $W_0^{\mathrm{sh}}=\sup_{t\in\mathbb{R}^+}\|\mathbf{W}_{\mathrm{sh}}(t)\|_{L^1}$. The open ball $B_\rho$ of $\mathcal{F}$ of center 0 and radius $\rho$ is stable under the dynamics of equation (13). Moreover it is an attracting set for this dynamics and if $V_0\notin B_\rho$ and $T=\inf\{t>0\ \text{such that}\ V(t)\in B_\rho\}$, then:

$$T\leq\frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}}-\rho}{\rho}\Big).$$

4 Stationary solutions

We look at the equilibrium states, noted $V_\mu^0$, of (3) when the external input $I_{\mathrm{ext}}$ and the connectivity $W$ do not depend upon the time. We assume that $W$ satisfies hypotheses (H1 bis)-(H2 bis). We redefine for convenience the sigmoidal function to be:

$$S(x)=\frac{1}{1+e^{-x}},$$

so that a stationary solution (independent of time) satisfies:

$$0=-\alpha V_\mu^0(z,\Delta)+\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|\Big)S\big(\mu V_\mu^0(z',\Delta')\big)\,\frac{d\Delta'}{\Delta'}\,dm(z')+I_{\mathrm{ext}}(z,\Delta).$$
(14)

We define the nonlinear operator from $\mathcal{F}$ to $\mathcal{F}$, noted $G_\mu$, by:

$$G_\mu(V)(z,\Delta)=\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|\Big)S\big(\mu V(z',\Delta')\big)\,\frac{d\Delta'}{\Delta'}\,dm(z').$$
(15)

Finally, (14) is equivalent to:

$$\alpha V_\mu^0(z,\Delta)=G_\mu\big(V_\mu^0\big)(z,\Delta)+I_{\mathrm{ext}}(z,\Delta).$$

4.1 Study of the nonlinear operator $G_\mu$

We recall that we have set $\mathcal{F}=L^\infty(\mathbb{D}\times\mathbb{R}_*^+)$ for the Banach space, and Proposition 3.2.1 shows that $G_\mu:\mathcal{F}\to\mathcal{F}$. We have the further properties:

Proposition 4.1.1 $G_\mu$ satisfies the following properties:

  • $\|G_\mu(V_1)-G_\mu(V_2)\|_{\mathcal{F}}\leq\mu W_0^g S'_m\,\|V_1-V_2\|_{\mathcal{F}}$ for all $\mu\geq 0$,

  • $\mu\mapsto G_\mu$ is continuous on $\mathbb{R}^+$.

Proof The first property follows from the Lipschitz estimate established in the proof of Theorem 3.2.1. The second property follows from the inequality:

$$\big\|G_{\mu_1}(V)-G_{\mu_2}(V)\big\|_{\mathcal{F}}\leq|\mu_1-\mu_2|\,W_0^g\,S'_m\,\|V\|_{\mathcal{F}}.$$

 □

We denote by $G_l$ and $G_\infty$ the two operators from $\mathcal{F}$ to $\mathcal{F}$ defined as follows for all $V\in\mathcal{F}$ and all $(z,\Delta)\in\mathbb{D}\times\mathbb{R}_*^+$:

$$G_l(V)(z,\Delta)=\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|\Big)V(z',\Delta')\,\frac{d\Delta'}{\Delta'}\,dm(z'),$$
(16)

and

$$G_\infty(V)(z,\Delta)=\int_{\mathbb{D}\times\mathbb{R}_*^+}W\Big(d_2(z,z'),\Big|\log\Big(\frac{\Delta}{\Delta'}\Big)\Big|\Big)H\big(V(z',\Delta')\big)\,\frac{d\Delta'}{\Delta'}\,dm(z'),$$

where H is the Heaviside function.

It is straightforward to show that both operators are well-defined on F and map F to F. Moreover the following proposition holds.

Proposition 4.1.2 We have

$$G_\mu\xrightarrow[\ \mu\to+\infty\ ]{}G_\infty.$$

Proof It is a direct application of the dominated convergence theorem, using the fact that:

$$S(\mu y)\xrightarrow[\ \mu\to+\infty\ ]{}H(y)\quad\text{for almost every }y\in\mathbb{R}.$$

 □
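The almost-everywhere limit used in this proof is easy to observe numerically (an illustrative sketch of ours, not from the paper):

```python
import numpy as np

def S(x):
    """Sigmoid of this section."""
    return 1.0 / (1.0 + np.exp(-x))

def H(y):
    """Heaviside function; its value at y = 0 is irrelevant for an a.e. statement."""
    return 1.0 if y > 0 else 0.0

# For any fixed y != 0, S(mu * y) tends to H(y) as the stiffness mu grows;
# at y = 0, S(mu * 0) = 1/2 for every mu, which is why the limit only holds a.e.
```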

4.2 The convolution form of the operator $G_\mu$ in the semi-homogeneous case

It is convenient to consider the functional space $\mathcal{F}_{\mathrm{sh}}=L^\infty(\mathbb{D})$ to discuss semi-homogeneous solutions. A semi-homogeneous persistent state of (3) is deduced from (14) and satisfies:

$$\alpha V_\mu^0(z)=G_\mu^{\mathrm{sh}}\big(V_\mu^0\big)(z)+I_{\mathrm{ext}}(z),$$
(17)

where the nonlinear operator $G_\mu^{\mathrm{sh}}$ from $\mathcal{F}_{\mathrm{sh}}$ to $\mathcal{F}_{\mathrm{sh}}$ is defined for all $V\in\mathcal{F}_{\mathrm{sh}}$ and $z\in\mathbb{D}$ by:

$$G_\mu^{\mathrm{sh}}(V)(z)=\int_{\mathbb{D}}W_{\mathrm{sh}}\big(d_2(z,z')\big)S\big(\mu V(z')\big)\,dm(z').$$

We define the associated operators $G_l^{\mathrm{sh}}$, $G_\infty^{\mathrm{sh}}$:

$$G_l^{\mathrm{sh}}(V)(z)=\int_{\mathbb{D}}W_{\mathrm{sh}}\big(d_2(z,z')\big)V(z')\,dm(z'),\qquad G_\infty^{\mathrm{sh}}(V)(z)=\int_{\mathbb{D}}W_{\mathrm{sh}}\big(d_2(z,z')\big)H\big(V(z')\big)\,dm(z').$$

We rewrite the operator G μ sh in a convenient form by using the convolution in the hyperbolic disk. First, we define the convolution in a such space. Let O denote the center of the Poincaré disk that is the point represented by z=0 and dg denote the Haar measure on the group G=SU(1,1) (see [22] and Appendix A), normalized by:

∫_G f(g·O) dg =def ∫_D f(z) dm(z),

for all functions of L 1 (D). Given two functions f 1 , f 2 in L 1 (D) we define the convolution by:

(f₁ ∗ f₂)(z) = ∫_G f₁(g·O) f₂(g⁻¹·z) dg.

We recall the notation W^sh(z) =def W^sh(d₂(z,O)).
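The geometric ingredients above can be illustrated numerically. The sketch below (helper functions are ours) checks that the SU(1,1) Möbius action underlying the convolution is an isometry for the hyperbolic distance d₂(z,z′) = tanh⁻¹(|z−z′|/|1−z̄z′|):

```python
import cmath, math

def d2(z, zp):
    """Hyperbolic distance on the Poincare disc: d2(z, z') = atanh(|z - z'| / |1 - conj(z) z'|)."""
    return math.atanh(abs(z - zp) / abs(1.0 - z.conjugate() * zp))

def mobius(a, b, z):
    """Action of g = [[a, b], [conj(b), conj(a)]] in SU(1,1) on the disc."""
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

# an element of SU(1,1): |a|^2 - |b|^2 = 1
t, phi, psi = 0.7, 0.3, 1.1
a = cmath.exp(1j * phi) * math.cosh(t)
b = cmath.exp(1j * psi) * math.sinh(t)
assert abs(abs(a) ** 2 - abs(b) ** 2 - 1.0) < 1e-12

z, zp = 0.3 + 0.2j, -0.1 + 0.4j
gz, gzp = mobius(a, b, z), mobius(a, b, zp)
assert abs(gz) < 1.0 and abs(gzp) < 1.0          # the disc is preserved
assert abs(d2(gz, gzp) - d2(z, zp)) < 1e-12      # and so is the hyperbolic distance
```

This invariance, d₂(g·z, g·z′) = d₂(z, z′), is exactly what is used in the proof of Proposition 4.2.1 below.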

Proposition 4.2.1 For allμ0andV F sh we have:

G_μ^sh(V) = W^sh ∗ S(μV), G_l^sh(V) = W^sh ∗ V and G_∞^sh(V) = W^sh ∗ H(V).
(18)

Proof We only prove the result for G_μ^sh. Let z ∈ D; then:

G_μ^sh(V)(z) = ∫_D W^sh(d₂(z,z′)) S(μV(z′)) dm(z′) = ∫_G W^sh(d₂(z, g·O)) S(μV(g·O)) dg = ∫_G W^sh(d₂(g·g⁻¹·z, g·O)) S(μV(g·O)) dg

and for all g ∈ SU(1,1), d₂(z,z′) = d₂(g·z, g·z′), so that:

G_μ^sh(V)(z) = ∫_G W^sh(d₂(g⁻¹·z, O)) S(μV(g·O)) dg = W^sh ∗ S(μV)(z).

 □

Let b be a point on the boundary circle ∂D. For z ∈ D, we define the ‘inner product’ ⟨z,b⟩ as the algebraic distance to the origin of the (unique) horocycle based at b passing through z (see [7]). Note that ⟨z,b⟩ does not depend on the position of z on that horocycle. The Fourier transform in D is defined as (see [22]):

h̃(λ,b) = ∫_D h(z) e^{(−iλ+1)⟨z,b⟩} dm(z), ∀(λ,b) ∈ ℝ×∂D,

for a function h:DC such that this integral is well-defined.
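A convenient closed form, exp(2⟨z,b⟩) = (1−|z|²)/|z−b|² (the Poisson-kernel identity; this formula is standard, see Helgason [22], and is assumed here), makes the two defining properties of ⟨z,b⟩ easy to check numerically:

```python
import cmath, math

def inner(z, b):
    """Horocyclic 'inner product' <z, b>: exp(2<z, b>) = (1 - |z|^2)/|z - b|^2
    (the Poisson-kernel identity, assumed here; see Helgason [22])."""
    return 0.5 * math.log((1.0 - abs(z) ** 2) / abs(z - b) ** 2)

# <z, 1> is constant on the horocycle based at b = 1 through the origin,
# i.e. the Euclidean circle of centre 1/2 and radius 1/2 (tangent to the unit circle at 1)
vals = [inner(0.5 + 0.5 * cmath.exp(1j * psi), 1.0) for psi in (0.5, 1.5, 2.5, 3.0)]
assert all(abs(v) < 1e-12 for v in vals)  # that horocycle carries <z, 1> = 0

# rotation invariance <r_phi z, r_phi b> = <z, b>, used in the proof of Lemma 4.2.1
z, r = 0.3 + 0.2j, cmath.exp(0.8j)
assert abs(inner(r * z, r * 1.0) - inner(z, 1.0)) < 1e-12
```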

Lemma 4.2.1 The Fourier transform inD, W ˜ sh (λ,b)of W sh does not depend upon the variablebD.

Proof For all λ ∈ ℝ and b = e^{iθ} ∈ ∂D,

W̃^sh(λ,b) = ∫_D W^sh(z) e^{(−iλ+1)⟨z,b⟩} dm(z).

We recall that for all φ ∈ ℝ, r_φ is the rotation of angle φ, and that W^sh(r_φ·z) = W^sh(z), dm(z) = dm(r_φ·z) and ⟨r_φ·z, r_φ·b⟩ = ⟨z,b⟩; then:

W̃^sh(λ,b) = ∫_D W^sh(r_θ·z) e^{(−iλ+1)⟨r_θ·z, r_θ·1⟩} dm(z) = ∫_D W^sh(z) e^{(−iλ+1)⟨z,1⟩} dm(z) =def W̃^sh(λ).

 □

We now introduce two functions that enjoy some nice properties with respect to the Hyperbolic Fourier transform and are eigenfunctions of the linear operator G l sh .

Proposition 4.2.2 Let e_{λ,b}(z) = e^{(iλ+1)⟨z,b⟩} and Φ_λ(z) = ∫_{∂D} e^{(iλ+1)⟨z,b⟩} db; then:

  • G_l^sh(e_{λ,b}) = W̃^sh(λ) e_{λ,b},

  • G_l^sh(Φ_λ) = W̃^sh(λ) Φ_λ.

Proof We begin with b=1D and use the horocyclic coordinates. We use the same changes of variables as in Lemma 3.1.1:

G l sh ( e λ , 1 ) ( n s a t O ) = R 2 W sh ( d 2 ( n s a t O , n s a t O ) ) e ( i λ 1 ) t d t d s = R 2 W sh ( d 2 ( n s s a t O , a t O ) ) e ( i λ 1 ) t d t d s = R 2 W sh ( d 2 ( a t n x O , a t O ) ) e ( i λ 1 ) t + 2 t d t d x = R 2 W sh ( d 2 ( O , n x a t t O ) ) e ( i λ 1 ) t + 2 t d t d x = R 2 W sh ( d 2 ( O , n x a y O ) ) e ( i λ 1 ) ( y + t ) + 2 t d y d x = e ( i λ + 1 ) n s a t O , 1 W ˜ sh ( λ ) .

By rotation, we obtain the property for all bD.

For the second property, [22, Lemma 4.7] shows that:

W sh Φ λ (z)= D e ( i λ + 1 ) z , b W ˜ sh (λ)db= Φ λ (z) W ˜ sh (λ).

 □
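The function Φ_λ can be evaluated by direct angular quadrature. The sketch below checks numerically that it is radial and real-valued, and that Φ_λ(O) = 1 (the normalisation used throughout); db is taken to be the normalised angular measure, an assumption consistent with that normalisation:

```python
import cmath, math

def inner(z, b):
    # exp(2<z,b>) = (1 - |z|^2)/|z - b|^2 (Poisson-kernel identity, assumed)
    return 0.5 * math.log((1.0 - abs(z) ** 2) / abs(z - b) ** 2)

def Phi(lam, z, n=4000):
    """Phi_lambda(z) as the average of exp((i*lam + 1)<z, b>) over b on the unit circle."""
    tot = 0.0 + 0.0j
    for k in range(n):
        b = cmath.exp(2j * math.pi * (k + 0.5) / n)
        tot += cmath.exp((1j * lam + 1.0) * inner(z, b))
    return tot / n

lam, rho = 1.3, 0.4
v1 = Phi(lam, rho + 0j)                    # z on the positive real axis
v2 = Phi(lam, rho * cmath.exp(0.9j))       # same hyperbolic radius, rotated
assert abs(v1 - v2) < 1e-6                 # Phi_lambda is radial
assert abs(v1.imag) < 1e-6                 # and real-valued (Phi_{-lambda} = Phi_lambda)
assert abs(Phi(lam, 0j) - 1.0) < 1e-9
```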

A consequence of this proposition is the following lemma.

Lemma 4.2.2 The linear operator G_l^sh is not compact, and for all μ ≥ 0 the nonlinear operator G_μ^sh is not compact.

Proof The previous Proposition 4.2.2 shows that G_l^sh has a continuous spectrum, which implies that it is not a compact operator.

Let U ∈ F_sh. For all V ∈ F_sh we differentiate G_μ^sh and compute its Fréchet derivative:

D(G_μ^sh)_U(V)(z) = μ ∫_D W^sh(d₂(z,z′)) S′(μU(z′)) V(z′) dm(z′).

If we assume further that U does not depend upon the space variable z, U(z) = U₀, we obtain:

D(G_μ^sh)_{U₀}(V)(z) = μ S′(μU₀) G_l^sh(V)(z).

If G_μ^sh were a compact operator then its Fréchet derivative D(G_μ^sh)_{U₀} would also be a compact operator, which is impossible. As a consequence, G_μ^sh is not a compact operator. □

4.3 The convolution form of the operator G μ in the general case

We adapt the ideas presented in the previous section to deal with the general case. We recall that if H is the group of positive real numbers with multiplication as its operation, then its Haar measure dh is given by dx/x. For two functions f₁, f₂ in L¹(D×ℝ⁺) we define the convolution by:

(f₁ ∗ f₂)(z,Δ) =def ∫_G ∫_H f₁(g·O, h·1) f₂(g⁻¹·z, h⁻¹·Δ) dg dh.

We recall that we have set by definition W(z,Δ) = W(d₂(z,0), |log(Δ)|).

Proposition 4.3.1 For allμ0andVFwe have:

G_μ(V) = W ∗ S(μV), G_l(V) = W ∗ V and G_∞(V) = W ∗ H(V).
(19)

Proof Let (z,Δ) be in D× R + . We follow the same ideas as in Proposition 4.2.1 and prove only the first result. We have

G_μ(V)(z,Δ) = ∫_{D×ℝ⁺} W(d₂(z,z′), |log(Δ/Δ′)|) S(μV(z′,Δ′)) (dΔ′/Δ′) dm(z′) = ∫_G ∫_{ℝ⁺} W(d₂(g⁻¹·z, O), |log(Δ/Δ′)|) S(μV(g·O, Δ′)) dg (dΔ′/Δ′) = ∫_G ∫_H W(d₂(g⁻¹·z, O), |log(h⁻¹·Δ)|) S(μV(g·O, h·1)) dg dh = W ∗ S(μV)(z,Δ).

 □

We next assume that the function W is separable in z and Δ, more precisely that W(z,Δ) = W₁(z) W₂(log(Δ)) where W₁(z) = W₁(d₂(z,0)) and W₂(log(Δ)) = W₂(|log(Δ)|) for all (z,Δ) ∈ D×ℝ⁺. The following proposition echoes Proposition 4.2.2.

Proposition 4.3.2 Let e_{λ,b}(z) = e^{(iλ+1)⟨z,b⟩}, Φ_λ(z) = ∫_{∂D} e^{(iλ+1)⟨z,b⟩} db and h_ξ(Δ) = e^{iξ log(Δ)}; then:

  • G_l(e_{λ,b} h_ξ) = W̃₁(λ) Ŵ₂(ξ) e_{λ,b} h_ξ,

  • G_l(Φ_λ h_ξ) = W̃₁(λ) Ŵ₂(ξ) Φ_λ h_ξ,

where Ŵ₂ is the usual Fourier transform of W₂.

Proof The proof of this proposition is exactly the same as for Proposition 4.2.2. Indeed:

G_l(e_{λ,b} h_ξ)(z,Δ) = W₁ ∗ e_{λ,b}(z) ∫_{ℝ⁺} W₂(log(Δ/Δ′)) e^{iξ log(Δ′)} (dΔ′/Δ′) = W₁ ∗ e_{λ,b}(z) (∫_ℝ W₂(y) e^{−iξy} dy) e^{iξ log(Δ)}.

 □

A straightforward consequence of this proposition is an extension of Lemma 4.2.2 to the general case:

Lemma 4.3.1 The linear operator G_l is not compact, and for all μ ≥ 0 the nonlinear operator G_μ is not compact.

4.4 The set of the solutions of (14)

Let B μ be the set of the solutions of (14) for a given slope parameter μ:

B_μ = {V ∈ F | −αV + G_μ(V) + I_ext = 0}.

We have the following proposition.

Proposition 4.4.1 If the input current I_ext is equal to a constant I⁰_ext, that is, does not depend upon the variables (z,Δ), then for all μ ∈ ℝ⁺, B_μ ≠ ∅. In the general case I_ext ∈ F, if the condition μ S_m W₀^g < α is satisfied, then Card(B_μ) = 1.

Proof Due to the properties of the sigmoid function, there always exists a constant solution in the case where I ext is constant. In the general case where I ext F, the statement is a direct application of the Banach fixed point theorem, as in [23]. □

Remark 4.4.1 If the external input does not depend upon the variables (z,Δ) and if the condition μ S_m W₀^g < α is satisfied, then there exists a unique stationary solution, by application of Proposition 4.4.1. Moreover, this stationary solution does not depend upon the variables (z,Δ), because there always exists one constant stationary solution when the external input does not depend upon these variables. Indeed, equation (14) is then equivalent to:

0 = −αV⁰ + βS(V⁰) + I⁰_ext where β = ∫_{D×ℝ⁺} W(d₂(z,z′), |log(Δ/Δ′)|) (dΔ′/Δ′) dm(z′),

and β does not depend upon the variables (z,Δ) because of Lemma 3.1.1. Because of the properties of the sigmoid function S, the equation 0 = −αV⁰ + βS(V⁰) + I⁰_ext always has a solution.

If, on the other hand, the input current does depend upon these variables, is invariant under the action of a subgroup of U(1,1), the group of the isometries of D (see Appendix A), and the condition μ S_m W₀^g < α is satisfied, then the unique stationary solution will also be invariant under the action of the same subgroup. We refer the interested reader to our work [15] on the equivariant bifurcation of hyperbolic planforms for this subject.

When the condition μ S_m W₀^g < α is satisfied, we call the unique solution in B_μ the primary stationary solution.
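The scalar stationary equation of Remark 4.4.1 is easy to solve by bisection, since its right-hand side goes from +∞ to −∞. A minimal sketch with illustrative constants α, β, I⁰ (ours, not taken from the paper):

```python
import math

# illustrative constants (ours, not from the paper)
alpha, beta, I0 = 0.1, 0.05, 0.02

def S(x):
    return 1.0 / (1.0 + math.exp(-x)) if x >= 0 else math.exp(x) / (1.0 + math.exp(x))

def g(v):
    """Right-hand side of the scalar stationary equation 0 = -alpha*V0 + beta*S(V0) + I0."""
    return -alpha * v + beta * S(v) + I0

# g(v) -> +inf as v -> -inf and g(v) -> -inf as v -> +inf, so a root exists: bisect
lo, hi = -100.0, 100.0
assert g(lo) > 0.0 > g(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0.0:
        lo = mid
    else:
        hi = mid
v0 = 0.5 * (lo + hi)
assert abs(g(v0)) < 1e-10
```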

4.5 Stability of the primary stationary solution

In this subsection we show that the condition μ S m W 0 g <α guarantees the stability of the primary stationary solution to (3).

Theorem 4.5.1 We suppose that I ∈ F and that the condition μ S_m W₀^g < α is satisfied; then the associated primary stationary solution of (3) is asymptotically stable.

Proof Let V⁰_μ be the primary stationary solution of (3), which exists since μ S_m W₀^g < α is satisfied. Let also V_μ be the unique solution of the same equation with some initial condition V_μ(0) = φ ∈ F, see Theorem 3.3.1. We introduce a new function X = V_μ − V⁰_μ which satisfies:

∂_t X(z,Δ,t) = −αX(z,Δ,t) + ∫_{D×ℝ⁺} W_m(d₂(z,z′), |log(Δ/Δ′)|) Θ(X(z′,Δ′,t)) (dΔ′/Δ′) dm(z′),

where W_m(d₂(z,z′), |log(Δ/Δ′)|) = S_m W(d₂(z,z′), |log(Δ/Δ′)|), and Θ(X(z,Δ,t)) = S̄(μV_μ(z,Δ,t)) − S̄(μV⁰_μ(z,Δ)) with S̄ = (S_m)⁻¹ S. We note that, because of the definition of Θ and the mean value theorem, |Θ(X(z,Δ,t))| ≤ μ|X(z,Δ,t)|; that is, |Θ(r)| ≤ μ|r| for all r ∈ ℝ.

If we set G(t) = e^{αt}‖X(t)‖_F, then we have:

G(t) ≤ G(0) + μ W₀^g S_m ∫₀^t G(u) du,

and G is continuous for all t ≥ 0. Gronwall's inequality implies that:

G(t) ≤ G(0) e^{μ W₀^g S_m t}, that is, ‖X(t)‖_F ≤ e^{(μ W₀^g S_m − α)t} ‖X(0)‖_F,

and the conclusion follows. □
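A scalar caricature of this stability mechanism can be simulated directly: when μ S_m W₀^g < α (with S_m = 1/4 for the logistic sigmoid), all trajectories relax to the unique stationary state at rate at least α − μ S_m W₀^g. The parameter values below are ours:

```python
import math

# scalar caricature of (3): dV/dt = -alpha*V + w0*S(mu*V) + I; parameters are ours,
# chosen so that the contraction condition mu*Sm*w0 < alpha holds (Sm = sup S' = 1/4)
alpha, mu, w0, I = 0.1, 1.0, 0.2, 0.05
Sm = 0.25
assert mu * Sm * w0 < alpha

def S(x):
    return 1.0 / (1.0 + math.exp(-x)) if x >= 0 else math.exp(x) / (1.0 + math.exp(x))

def simulate(v, T=500.0, dt=0.01):
    """Explicit Euler integration of the scalar equation."""
    for _ in range(int(T / dt)):
        v += dt * (-alpha * v + w0 * S(mu * v) + I)
    return v

# two distant initial conditions relax to the same stationary state: their gap
# contracts at least like exp(-(alpha - mu*Sm*w0) t) = exp(-0.05 t)
v_a, v_b = simulate(5.0), simulate(-5.0)
assert abs(v_a - v_b) < 1e-6
```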

5 Spatially localised bumps in the high gain limit

In many models of working memory, transient stimuli are encoded by feature-selective persistent neural activity. Such stimuli are imagined to induce the formation of a spatially localised bump of persistent activity which coexists with a stable uniform state. As an example, Camperi and Wang [24] have proposed and studied a network model of visuo-spatial working memory in prefrontal cortex adapted from the ring model of orientation of Ben-Yishai and colleagues [1]. Many studies have emerged in the past decades to analyse these localised bumps of activity [25–29]; see the paper by Coombes for a review of the domain [30]. In [25, 26, 28], the authors have examined the existence and stability of bump and multi-bump solutions to an integro-differential equation describing neuronal activity along a single spatial domain. In [27, 29] the study is focused on the two-dimensional model and a method is developed to approximate the integro-differential equation by a partial differential equation which makes possible the determination of the stability of circularly symmetric solutions. It is therefore natural to study the emergence of spatially localised bumps for the structure tensor model in a hypercolumn of V1. We only deal with the reduced case of equation (13), which means that the membrane activity does not depend upon the contrast of the image intensity, keeping the general case for future work.

In order to construct exact bump solutions and to compare our results to previous studies [25–29], we consider the high gain limit μ → ∞ of the sigmoid function. As above we denote by H the Heaviside function defined by H(x) = 1 for x ≥ 0 and H(x) = 0 otherwise. Equation (13) is rewritten as:

∂_t V(z,t) = −αV(z,t) + ∫_D W(z,z′) H(V(z′,t) − κ) dm(z′) + I(z,t) = −αV(z,t) + ∫_{{z′∈D | V(z′,t) ≥ κ}} W(z,z′) dm(z′) + I(z).
(20)

We have introduced a threshold κ to shift the zero of the Heaviside function. We make the assumption that the system is spatially homogeneous, that is, the external input I does not depend upon the variable t, and that the connectivity function depends only on the hyperbolic distance between two points of D: W(z,z′) = W(d₂(z,z′)). For illustrative purposes, we will use the exponential weight distribution as a specific example throughout this section:

W(z,z′) = W(d₂(z,z′)) = exp(−d₂(z,z′)/b).
(21)

The theoretical study of equation (20) has been done in [21], where the authors have imposed strong regularity assumptions on the kernel function W, such as Hölder continuity, and used compactness arguments and integral equation techniques to obtain a global existence result for solutions of (20). Our approach is very different: we follow [25–29, 31] and proceed in a constructive fashion. In a first part, we define what we call a hyperbolic radially symmetric bump and present some preliminary results for the linear stability analysis of the last part. The second part is devoted to the proof of the technical Theorem 5.1.1 which is stated in the first part. The proof uses results on the Fourier transform introduced in Section 4, hyperbolic geometry and hypergeometric functions. Our results will be illustrated in the following Section 6.

5.1 Existence of hyperbolic radially symmetric bumps

From equation (20) a general stationary pulse satisfies the equation:

αV(z) = ∫_{{z′∈D | V(z′) ≥ κ}} W(z,z′) dm(z′) + I_ext(z).

For convenience, we denote by M(z,K) the integral ∫_K W(z,z′) dm(z′), with K = {z ∈ D | V(z) ≥ κ}. The relation V(z) = κ holds for all z ∈ ∂K.

Definition 5.1.1 V is called a hyperbolic radially symmetric stationary-pulse solution of (20) if V depends only upon the variable r and is such that:

V(r) > κ, r ∈ [0,ω[, V(ω) = κ, V(r) < κ, r ∈ ]ω,∞[, V(∞) = 0,

and is a fixed point of equation (20):

αV(r)=M(r,ω)+ I ext (r),
(22)

where I_ext(r) = I e^{−r²/(2σ²)} is a Gaussian input and M(r,ω) is defined by the following equation:

M(r,ω) = def M ( z , B h ( 0 , ω ) )

and B h (0,ω)is a hyperbolic disk centered at the origin of hyperbolic radius ω.

From symmetry arguments there exists a hyperbolic radially symmetric stationary-pulse solution V(r) of (20), furthermore the threshold κ and width ω are related according to the self-consistency condition

ακ=M(ω)+ I ext (ω) = def N(ω),
(23)

where

M(ω) = def M(ω,ω).

The existence of such a bump can then be established by finding solutions of (23). The function N(ω) is plotted in Figure 1 for a range of the input amplitude I. The horizontal dashed lines indicate different values of ακ; the points of intersection determine the existence of stationary pulse solutions. Qualitatively, for sufficiently large input amplitude I we have N′(0) < 0 and it is possible to find only one solution branch for large ακ. For small input amplitudes I we have N′(0) > 0 and there always exists one solution branch for ακ < γc ≈ 0.06. For intermediate values of the input amplitude I, as ακ varies, we have the possibility of zero, one or two solutions. Anticipating the stability results of Section 5.3, we obtain that when N′(ω) < 0 the corresponding solution is stable.

Fig. 1
figure 1

Plot of N(ω) defined in (23) as a function of the pulse width ω for several values of the input amplitude I and for a fixed input width σ=0.05. The horizontal dashed lines indicate different values of ακ. The connectivity function is given in equation (21) and the parameter b is set to b=0.2.
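The self-consistency function N(ω) can be evaluated by direct quadrature in hyperbolic polar coordinates (dm = ½ sinh(2r′) dr′ dθ′), and a bump width then follows by bisection. A sketch, with the connectivity (21); the input amplitude and the level ακ are our choices, the latter picked so that a crossing is guaranteed:

```python
import cmath, math

# connectivity (21) with b = 0.2, Gaussian input of width sigma = 0.05; amplitude I is ours
b, sigma, I = 0.2, 0.05, 0.1

def d2(z, zp):
    return math.atanh(abs(z - zp) / abs(1.0 - z.conjugate() * zp))

def M(omega, nr=120, nt=120):
    """M(omega, omega): integral of exp(-d2(z, z')/b) over B_h(0, omega) for z = tanh(omega),
    in hyperbolic polar coordinates with dm = (1/2) sinh(2 r') dr' dtheta'."""
    z = complex(math.tanh(omega))
    tot = 0.0
    for i in range(nr):
        r = omega * (i + 0.5) / nr
        ring = 0.0
        for j in range(nt):
            zp = math.tanh(r) * cmath.exp(2j * math.pi * (j + 0.5) / nt)
            ring += math.exp(-d2(z, zp) / b)
        tot += ring * 0.5 * math.sinh(2.0 * r) * (omega / nr) * (2.0 * math.pi / nt)
    return tot

def N(omega):
    return M(omega) + I * math.exp(-omega ** 2 / (2.0 * sigma ** 2))

# pick a level alpha*kappa strictly between N(lo) and N(hi): a crossing is then guaranteed
lo, hi = 0.05, 1.0
Nlo = N(lo)
alpha_kappa = 0.5 * (Nlo + N(hi))
for _ in range(25):
    mid = 0.5 * (lo + hi)
    Nm = N(mid)
    if (Nm - alpha_kappa) * (Nlo - alpha_kappa) > 0.0:
        lo, Nlo = mid, Nm
    else:
        hi = mid
omega_star = 0.5 * (lo + hi)
assert 0.05 < omega_star < 1.0
assert abs(N(omega_star) - alpha_kappa) < 1e-2
```

Scanning N over a grid of ω for several amplitudes I reproduces the qualitative picture of Figure 1.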

We end this subsection with the following useful and technical formula.

Theorem 5.1.1 For all(r,ω) R + × R + :

M(r,ω) = (1/4) sinh(ω)² cosh(ω)² ∫_ℝ W̃(λ) Φ_λ^{(0,0)}(r) Φ_λ^{(1,1)}(ω) λ tanh(πλ/2) dλ,
(24)

where W ˜ (λ) is the Fourier Helgason transform of W(z) = def W( d 2 (z,0)) and

Φ_λ^{(α,β)}(ω) = F(½(ρ+iλ), ½(ρ−iλ); α+1; −sinh(ω)²),

with ρ = α+β+1, and F is the hypergeometric function of the first kind.

Remark 5.1.1 We recall that F admits the integral representation[32]:

F(α,β;γ;z) = (Γ(γ)/(Γ(β)Γ(γ−β))) ∫₀¹ t^{β−1} (1−t)^{γ−β−1} (1−tz)^{−α} dt

with ℜ(γ) > ℜ(β) > 0.

Remark 5.1.2 In Section 4we introduced the function Φ λ (z)= D e ( i λ + 1 ) z , b db. In[22], it is shown that:

Φ λ ( 0 , 0 ) (r)= Φ λ ( tanh ( r ) ) ifz=tanh(r) e i θ .

Remark 5.1.3 Let us point out that this result can be linked to the work of Folias and Bressloff [31], later used in [29]. They constructed a two-dimensional pulse for a general radially symmetric synaptic weight function. They obtained a similar formal representation of the integral of the connectivity function w over the disk B(O,a) centered at the origin O and of radius a. Using their notations:

M(a,r) = ∫₀^{2π} ∫₀^a w(|r − r′|) r′ dr′ dθ = 2πa ∫₀^∞ w̆(ρ) J₀(rρ) J₁(aρ) dρ,

where J_ν(x) is the Bessel function of the first kind of order ν and w̆ is the real Fourier transform of w. In our case, instead of the Bessel functions we find Φ_λ^{(ν,ν)}(r), which is linked to the hypergeometric function of the first kind.

We now show that for a general monotonically decreasing weight function W, the function M(r,ω) is necessarily a monotonically decreasing function of r. This will ensure that the hyperbolic radially symmetric stationary-pulse solution (22) is also a monotonically decreasing function of r in the case of a Gaussian input. The demonstration of this result will directly use Theorem 5.1.1.

Proposition 5.1.1 V is a monotonically decreasing function in r for any monotonically decreasing synaptic weight function W.

Proof Differentiating M with respect to r yields:

∂M/∂r(r,ω) = (1/2) ∫₀^ω ∫₀^{2π} ∂_r(W(d₂(tanh(r), tanh(r′)e^{iθ}))) sinh(2r′) dr′ dθ.

We have to compute

∂_r(W(d₂(tanh(r), tanh(r′)e^{iθ}))) = W′(d₂(tanh(r), tanh(r′)e^{iθ})) ∂_r(d₂(tanh(r), tanh(r′)e^{iθ})).

It follows from elementary hyperbolic trigonometry that

d₂(tanh(r), tanh(r′)e^{iθ}) = tanh⁻¹(√[(tanh(r)² + tanh(r′)² − 2 tanh(r) tanh(r′) cos(θ)) / (1 + tanh(r)² tanh(r′)² − 2 tanh(r) tanh(r′) cos(θ))]);
(25)

we let ρ = tanh(r), ρ′ = tanh(r′) and define

F_{ρ′,θ}(ρ) = (ρ² + ρ′² − 2ρρ′cos(θ)) / (1 + ρ²ρ′² − 2ρρ′cos(θ)).

It follows that

∂_ρ tanh⁻¹(√(F_{ρ′,θ}(ρ))) = ∂_ρ F_{ρ′,θ}(ρ) / (2 (1 − F_{ρ′,θ}(ρ)) √(F_{ρ′,θ}(ρ))),

and

∂_ρ F_{ρ′,θ}(ρ) = (1 − ρ′²) [2(ρ − ρ′cos(θ)) + 2ρρ′(ρ′ − ρcos(θ))] / (1 + ρ²ρ′² − 2ρρ′cos(θ))².

We conclude that if ρ > tanh(ω), then for all 0 ≤ ρ′ ≤ tanh(ω) and 0 ≤ θ ≤ 2π:

2(ρ − ρ′cos(θ)) + 2ρρ′(ρ′ − ρcos(θ)) > 0,

which implies ∂M/∂r(r,ω) < 0 for r > ω, since W′ < 0.

To see that it is also negative for r<ω, we differentiate equation (24) with respect to r:

∂M/∂r(r,ω) = (1/4) sinh(ω)² cosh(ω)² ∫_ℝ W̃(λ) ∂_r Φ_λ^{(0,0)}(r) Φ_λ^{(1,1)}(ω) λ tanh(πλ/2) dλ.

The following formula holds for hypergeometric functions (see Erdélyi [32]):

d/dz F(a,b;c;z) = (ab/c) F(a+1, b+1; c+1; z).

It implies

∂_r Φ_λ^{(0,0)}(r) = −(1/2) sinh(r) cosh(r) (1 + λ²) Φ_λ^{(1,1)}(r).

Substituting in the previous equation giving M r we find:

∂M/∂r(r,ω) = −(1/64) sinh(2ω)² sinh(2r) ∫_ℝ W̃(λ) (1 + λ²) Φ_λ^{(1,1)}(r) Φ_λ^{(1,1)}(ω) λ tanh(πλ/2) dλ,

implying that:

sgn ( M r ( r , ω ) ) =sgn ( M r ( ω , r ) ) .

Consequently, M r (r,ω)<0 for r<ω. Hence V is monotonically decreasing in r for any monotonically decreasing synaptic weight function W. □
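The contiguous-derivative identity for F used in the proof above can be checked numerically with a hand-rolled Gauss series (the helper below is ours, valid for |z| < 1):

```python
def hyp2f1(a, b, c, z, nmax=500):
    """Gauss hypergeometric series F(a, b; c; z), valid for |z| < 1 (our helper)."""
    term, total = 1.0, 1.0
    for n in range(nmax):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z
        total += term
        if abs(term) < 1e-17:
            break
    return total

# d/dz F(a,b;c;z) = (a*b/c) F(a+1,b+1;c+1;z), checked by a central finite difference
a, b, c, z, h = 0.5, 1.5, 2.0, 0.3, 1e-5
lhs = (hyp2f1(a, b, c, z + h) - hyp2f1(a, b, c, z - h)) / (2.0 * h)
rhs = (a * b / c) * hyp2f1(a + 1.0, b + 1.0, c + 1.0, z)
assert abs(lhs - rhs) < 1e-7
```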

As a consequence, for our particular choice of exponential weight function (21), the radially symmetric bump is monotonically decreasing in r, as will be recovered in our numerical experiments in Section 6.

5.2 Proof of Theorem 5.1.1

The proof of Theorem 5.1.1 goes in four steps. First we introduce some notations and recall some basic properties of the Fourier transform in the Poincaré disk. Second we prove two propositions. Third we state a technical lemma on hypergeometric functions, the proof being given in Lemma F.0.4 of Appendix F. The last step is devoted to the conclusion of the proof.

5.2.1 First step

In order to calculate M(r,ω), we use the Fourier transform in D which has already been introduced in Section 4. First we rewrite M(r,ω) as a convolution product:

Proposition 5.2.1 For all(r,ω) R + × R + :

M(r,ω) = (1/(4π)) ∫_ℝ W̃(λ) Φ_λ ∗ 1_{B_h(0,ω)}(z) λ tanh(πλ/2) dλ.
(26)

Proof We start with the definition of M(r,ω) and use the convolutional form of the integral:

M(r,ω) = M(z, B_h(0,ω)) = ∫_{B_h(0,ω)} W(z,z′) dm(z′) = ∫_D W(z,z′) 1_{B_h(0,ω)}(z′) dm(z′) = W ∗ 1_{B_h(0,ω)}(z).

In [22], Helgason proves an inversion formula for the hyperbolic Fourier transform and we apply this result to W:

W(z) = (1/(4π)) ∫_ℝ ∫_{∂D} W̃(λ,b) e^{(iλ+1)⟨z,b⟩} λ tanh(πλ/2) dλ db = (1/(4π)) ∫_ℝ W̃(λ) (∫_{∂D} e^{(iλ+1)⟨z,b⟩} db) λ tanh(πλ/2) dλ;

the last equality is a direct application of Lemma 4.2.1 and we can deduce that

W(z)= 1 4 π R W ˜ (λ) Φ λ (z)λtanh ( π 2 λ ) dλ.
(27)

Finally we have:

M(r,ω) = W ∗ 1_{B_h(0,ω)}(z) = (1/(4π)) ∫_ℝ W̃(λ) Φ_λ ∗ 1_{B_h(0,ω)}(z) λ tanh(πλ/2) dλ,

which is the desired formula. □

It appears that the study of M(r,ω) consists in calculating the convolution product Φ λ 1 B h ( 0 , ω ) (z).

Proposition 5.2.2 For all z = k·O with k ∈ G = SU(1,1) we have:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = ∫_{B_h(0,ω)} Φ_λ(k⁻¹·z′) dm(z′).

Proof Let z = k·O for k ∈ G; we have:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = ∫_G 1_{B_h(0,ω)}(g·O) Φ_λ(g⁻¹·z) dg = ∫_G 1_{B_h(0,ω)}(g·O) Φ_λ(g⁻¹·k·O) dg;

for all g, k ∈ G, Φ_λ(g⁻¹·k·O) = Φ_λ(k⁻¹·g·O), so that:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = ∫_G 1_{B_h(0,ω)}(g·O) Φ_λ(k⁻¹·g·O) dg = ∫_D 1_{B_h(0,ω)}(z′) Φ_λ(k⁻¹·z′) dm(z′) = ∫_{B_h(0,ω)} Φ_λ(k⁻¹·z′) dm(z′).

 □

5.2.2 Second step

In this part, we prove two results:

  • the mapping z = k·O = tanh(r)e^{iθ} ↦ Φ_λ ∗ 1_{B_h(0,ω)}(z) is a radial function, that is, it depends only upon the variable r;

  • the following equality holds for z = tanh(r)e^{iθ}:

    Φ_λ ∗ 1_{B_h(0,ω)}(z) = Φ_λ(a_r·O) ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′).

Proposition 5.2.3 If z = k·O is written tanh(r)e^{iθ}, with r = d₂(z,O), in hyperbolic polar coordinates, the function Φ_λ ∗ 1_{B_h(0,ω)}(z) depends only upon the variable r.

Proof If z = tanh(r)e^{iθ}, then z = rot_θ·a_r·O and k⁻¹ = a_{−r}·rot_{−θ}. Similarly z′ = rot_{θ′}·a_{r′}·O. Thanks to the previous Proposition 5.2.2 we can write:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = ∫_{B_h(0,ω)} Φ_λ(k⁻¹·z′) dm(z′) = (1/2) ∫₀^ω ∫₀^{2π} Φ_λ(a_{−r}·rot_{θ′−θ}·a_{r′}·O) sinh(2r′) dr′ dθ′ = (1/2) ∫₀^ω ∫₀^{2π} Φ_λ(a_{−r}·rot_ψ·a_{r′}·O) sinh(2r′) dr′ dψ = ∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′),

which, as announced, is only a function of r. □

We now give an explicit formula for the integral ∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′).

Proposition 5.2.4 For all z = tanh(r)e^{iθ} we have:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = Φ_λ(a_r·O) ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′).

Proof We first recall a formula from [22].

Lemma 5.2.1 For all g ∈ G the following equation holds:

Φ_λ(g⁻¹·z) = ∫_{∂D} e^{(−iλ+1)⟨g·O,b⟩} e^{(iλ+1)⟨z,b⟩} db.

Proof See [22]. □

It follows immediately that for all z ∈ D and r ∈ ℝ we have:

Φ_λ(a_{−r}·z) = ∫_{∂D} e^{(−iλ+1)⟨a_r·O,b⟩} e^{(iλ+1)⟨z,b⟩} db.

We integrate this formula over the hyperbolic ball B_h(0,ω), which gives:

∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′) = ∫_{B_h(0,ω)} (∫_{∂D} e^{(−iλ+1)⟨a_r·O,b⟩} e^{(iλ+1)⟨z′,b⟩} db) dm(z′),

and we exchange the order of integration:

∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′) = ∫_{∂D} e^{(−iλ+1)⟨a_r·O,b⟩} (∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,b⟩} dm(z′)) db.

We note that the integral ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,b⟩} dm(z′) does not depend upon the variable b = e^{iφ}. Indeed:

∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,b⟩} dm(z′) = (1/2) ∫₀^ω ∫₀^{2π} ((1 − tanh(x)²)/|tanh(x)e^{iθ} − e^{iφ}|²)^{(iλ+1)/2} sinh(2x) dx dθ = (1/2) ∫₀^ω ∫₀^{2π} ((1 − tanh(x)²)/|tanh(x)e^{i(θ−φ)} − 1|²)^{(iλ+1)/2} sinh(2x) dx dθ = (1/2) ∫₀^ω ∫₀^{2π} ((1 − tanh(x)²)/|tanh(x)e^{iθ} − 1|²)^{(iλ+1)/2} sinh(2x) dx dθ,

and indeed the integral does not depend upon the variable b:

∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,b⟩} dm(z′) = ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′).

Finally, we can write:

∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′) = ∫_{∂D} e^{(−iλ+1)⟨a_r·O,b⟩} db · ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′) = Φ_{−λ}(a_r·O) ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′) = Φ_λ(a_r·O) ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′),

because Φ_{−λ} = Φ_λ (they are solutions of the same equation).

This completes the proof that:

Φ_λ ∗ 1_{B_h(0,ω)}(z) = ∫_{B_h(0,ω)} Φ_λ(a_{−r}·z′) dm(z′) = Φ_λ(a_r·O) ∫_{B_h(0,ω)} e^{(iλ+1)⟨z′,1⟩} dm(z′).

 □

5.2.3 Third step

We state a useful formula.

Lemma 5.2.2 For allω>0the following formula holds:

B h ( 0 , ω ) Φ λ (z)dm(z)=πsinh ( ω ) 2 cosh ( ω ) 2 Φ λ ( 1 , 1 ) (ω).

Proof See Lemma F.0.4 of Appendix F. □

5.2.4 The main result

At this point we have proved the following proposition thanks to Propositions 5.2.1 and 5.2.4.

Proposition 5.2.5 If z = tanh(r)e^{iθ} ∈ B_h(0,ω), M(r,ω) is given by the following formula:

M(r,ω) = (1/(4π)) ∫_ℝ W̃(λ) Φ_λ(a_r·O) Ψ_λ(ω) λ tanh(πλ/2) dλ = (1/(4π)) ∫_ℝ W̃(λ) Φ_λ^{(0,0)}(r) Ψ_λ(ω) λ tanh(πλ/2) dλ,

where

Ψ λ (ω) = def B h ( 0 , ω ) e ( i λ + 1 ) z , 1 dm( z ).

We are now in a position to obtain the analytic form for M(r,ω) of Theorem 5.1.1. We prove that

Ψ λ (ω)= B h ( 0 , ω ) Φ λ (z)dm(z).

Indeed, in hyperbolic polar coordinates, we have:

Ψ_λ(ω) = (1/2) ∫₀^ω ∫₀^{2π} e^{(iλ+1)⟨rot_θ·a_r·O, 1⟩} sinh(2r) dr dθ = (1/2) ∫₀^ω ∫₀^{2π} e^{(iλ+1)⟨a_r·O, e^{−iθ}⟩} sinh(2r) dr dθ = π ∫₀^ω (∫_{∂D} e^{(iλ+1)⟨a_r·O,b⟩} db) sinh(2r) dr = π ∫₀^ω Φ_λ(a_r·O) sinh(2r) dr.

On the other hand:

B h ( 0 , ω ) Φ λ ( z ) dm ( z ) = 1 2 0 ω 0 2 π Φ λ ( a r O ) sinh ( 2 r ) d r d θ = π 0 ω Φ λ ( a r O ) sinh ( 2 r ) d r .

This yields

Ψ λ (ω)= B h ( 0 , ω ) Φ λ (z)dm(z)=πsinh ( ω ) 2 cosh ( ω ) 2 Φ λ ( 1 , 1 ) (ω),

and we use Lemma 5.2.2 to establish (24).

5.3 Linear stability analysis

We now analyse the evolution of small time-dependent perturbations of the hyperbolic stationary-pulse solution through linear stability analysis. We use classical tools already developed in [29, 31].

5.3.1 Spectral analysis of the linearized operator

Equation (20) is linearized about the stationary solution V(r) by introducing the time-dependent perturbation:

v(z,t)=V(r)+ϕ(z,t).

This leads to the linear equation:

∂_t φ(z,t) = −αφ(z,t) + ∫_D W(d₂(z,z′)) H′(V(r′) − κ) φ(z′,t) dm(z′).

We separate variables by setting ϕ(z,t)=ϕ(z) e β t to obtain the equation:

(β+α)φ(z) = ∫_D W(d₂(z,z′)) H′(V(r′) − κ) φ(z′) dm(z′).

Introducing the hyperbolic polar coordinates z=tanh(r) e i θ and using the result:

H′(V(r) − κ) = δ(V(r) − κ) = δ(r − ω)/|V′(ω)|,

we obtain:

(β+α)φ(z) = (1/2) ∫₀^∞ ∫₀^{2π} W(d₂(tanh(r)e^{iθ}, tanh(r′)e^{iθ′})) (δ(r′−ω)/|V′(ω)|) φ(tanh(r′)e^{iθ′}) sinh(2r′) dr′ dθ′ = (sinh(2ω)/(2|V′(ω)|)) ∫₀^{2π} W(d₂(tanh(r)e^{iθ}, tanh(ω)e^{iθ′})) φ(tanh(ω)e^{iθ′}) dθ′.

Note that we have formally differentiated the Heaviside function, which is permissible since it arises inside a convolution. One could also develop the linear stability analysis by considering perturbations of the threshold crossing points along the lines of Amari [20]. Since we are linearizing about a stationary rather than a travelling pulse, we can analyze the spectrum of the linear operator without recourse to Evans functions.

With a slight abuse of notation we are led to study the solutions of the integral equation:

(β+α)φ(r,θ) = (sinh(2ω)/(2|V′(ω)|)) ∫₀^{2π} W(r,ω; θ′−θ) φ(ω,θ′) dθ′,
(28)

where the following equality derives from the definition of the hyperbolic distance in equation (25):

W(r,ω;φ) =def W(tanh⁻¹(√[(tanh(r)² + tanh(ω)² − 2 tanh(r) tanh(ω) cos(φ)) / (1 + tanh(r)² tanh(ω)² − 2 tanh(r) tanh(ω) cos(φ))])).

Essential spectrum If the function ϕ satisfies the condition

0 2 π W(r,ω; θ )ϕ(ω,θ θ )d θ =0r,

then equation (28) reduces to:

β+α=0

yielding the eigenvalue:

β=α<0.

This part of the essential spectrum is negative and does not cause instability.

Discrete spectrum If we are not in the previous case we have to study the solutions of the integral equation (28).

This equation shows that ϕ(r,θ) is completely determined by its values ϕ(ω,θ) on the circle of equation r=ω. Hence, we need only to consider r=ω, yielding the integral equation:

(β+α)ϕ(ω,θ)= sinh ( 2 ω ) 2 | V ( ω ) | 0 2 π W(ω,ω; θ )ϕ(ω,θ θ )d θ .

The solutions of this equation are exponential functions e γ θ , where γ satisfies:

(β+α) = (sinh(2ω)/(2|V′(ω)|)) ∫₀^{2π} W(ω,ω;θ′) e^{−γθ′} dθ′.

By the requirement that ϕ is 2π-periodic in θ, it follows that γ=in, where nZ. Thus the integral operator with kernel W has a discrete spectrum given by:

β_n + α = (sinh(2ω)/(2|V′(ω)|)) ∫₀^{2π} W(ω,ω;θ) e^{−inθ} dθ = (sinh(2ω)/(2|V′(ω)|)) ∫₀^{2π} W(tanh⁻¹(√[2tanh(ω)²(1 − cos(θ)) / (1 + tanh(ω)⁴ − 2tanh(ω)²cos(θ))])) e^{−inθ} dθ = (sinh(2ω)/|V′(ω)|) ∫₀^π W(tanh⁻¹(2tanh(ω)sin(θ) / √[(1 − tanh(ω)²)² + 4tanh(ω)²sin(θ)²])) e^{−2inθ} dθ.

β_n is real since:

ℑ(β_n) = −(sinh(2ω)/|V′(ω)|) ∫₀^π W(tanh⁻¹(2tanh(ω)sin(θ) / √[(1 − tanh(ω)²)² + 4tanh(ω)²sin(θ)²])) sin(2nθ) dθ = 0.

Hence,

β_n = ℜ(β_n) = −α + (sinh(2ω)/|V′(ω)|) ∫₀^π W(tanh⁻¹(2tanh(ω)sin(θ) / √[(1 − tanh(ω)²)² + 4tanh(ω)²sin(θ)²])) cos(2nθ) dθ.
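The coefficients β_n are easy to compute by quadrature; since the kernel W∘tanh⁻¹ is positive, β_0 must dominate all the others, which the sketch below verifies for the exponential connectivity. The values of ω and |V′(ω)| are illustrative placeholders of ours, not derived from a particular bump:

```python
import math

# illustrative parameters (ours): exponential connectivity W(x) = exp(-x/b)
alpha, b, omega, Vp = 0.1, 0.2, 0.3, 1.0   # Vp stands in for |V'(omega)|

def kernel(theta):
    t = math.tanh(omega)
    arg = 2.0 * t * math.sin(theta) / math.sqrt((1.0 - t * t) ** 2 + 4.0 * t * t * math.sin(theta) ** 2)
    return math.exp(-math.atanh(arg) / b)

def beta(n, m=2000):
    """beta_n = -alpha + sinh(2w)/|V'(w)| * int_0^pi W(atanh(...)) cos(2 n theta) dtheta."""
    s = sum(kernel(math.pi * (k + 0.5) / m) * math.cos(2.0 * n * math.pi * (k + 0.5) / m)
            for k in range(m))
    return -alpha + math.sinh(2.0 * omega) / Vp * s * math.pi / m

betas = [beta(n) for n in range(6)]
# the kernel is positive, so beta_0 dominates: beta_n <= beta_0 for all n >= 1
assert all(bn <= betas[0] + 1e-12 for bn in betas[1:])
```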

We can state the following proposition:

Proposition 5.3.1 Provided that β_n < 0 for all n ≥ 0, the hyperbolic stationary pulse is stable.

We now derive a reduced condition linking the parameters for the stability of hyperbolic stationary pulse.

Reduced condition Since W∘tanh⁻¹ is a positive function, it follows that:

β_n ≤ β_0.

Stability of the hyperbolic stationary pulse requires that β_n < 0 for all n ≥ 0. This can be rewritten as:

(sinh(2ω)/|V′(ω)|) ∫₀^π W(tanh⁻¹(2tanh(ω)sin(θ) / √[(1 − tanh(ω)²)² + 4tanh(ω)²sin(θ)²])) cos(2nθ) dθ < α, ∀n ≥ 0.

Using the fact that β_n ≤ β_0 for all n ≥ 1, we obtain the reduced stability condition:

W₀(ω)/|V′(ω)| < α,

where

W₀(ω) =def sinh(2ω) ∫₀^π W(tanh⁻¹(2tanh(ω)sin(θ) / √[(1 − tanh(ω)²)² + 4tanh(ω)²sin(θ)²])) dθ.

From (22) we have:

V′(ω) = (1/α)((∂M/∂r)(ω,ω) + I′_ext(ω)),

where

M_r(ω) =def −(∂M/∂r)(ω,ω) = (1/64) sinh(2ω)³ ∫_ℝ W̃(λ) (1 + λ²) Φ_λ^{(1,1)}(ω)² λ tanh(πλ/2) dλ.

We have previously established that M_r(ω) > 0, and I′_ext(ω) is negative by definition. Hence, setting D(ω) = |I′_ext(ω)|, we have

| V ( ω ) | = 1 α ( M r ( ω ) + D ( ω ) ) .

By substitution we obtain another form of the reduced stability condition:

D(ω) > W₀(ω) − M_r(ω).
(29)

We also have:

M′(ω) = d/dω M(ω,ω) = (∂M/∂r)(ω,ω) + (∂M/∂ω)(ω,ω) = W₀(ω) − M_r(ω),

and

N′(ω) = M′(ω) + I′_ext(ω) = W₀(ω) − M_r(ω) − D(ω),

showing that the stability condition (29) is satisfied when N′(ω) < 0 and is not satisfied when N′(ω) > 0.

Proposition 5.3.2 (Reduced condition)

If N′(ω) < 0, then β_n < 0 for all n ≥ 0 and the hyperbolic stationary pulse is stable.

6 Numerical results

The aim of this section is to solve (13) numerically for different values of the parameters. This implies developing a numerical scheme that approximates the solution of our equation and proving that this scheme effectively converges to the solution.

Since equation (13) is defined on D, computing its solutions on the whole hyperbolic disc has the same complexity as computing the solutions of the usual Euclidean neural field equations defined on ℝ². Like most authors in the Euclidean case [26, 27, 29, 31], we reduce the domain of integration to a compact region of the hyperbolic disc. In practice, we work in the Euclidean ball of radius a = 0.5 centered at 0. Note that a Euclidean ball centered at the origin is also a centered hyperbolic ball, their radii being different.

We have divided this section into four parts. The first part is dedicated to the study of the discretization scheme of equation (13). In the following two parts, we study the solutions for different connectivity functions: an exponential function, Section 6.2, and a difference of Gaussians, Section 6.3.

6.1 Numerical schemes

Let us consider the modified equation of (13):

{ ∂_t V(z,t) = −αV(z,t) + ∫_{B(0,a)} W(z,z′) S(V(z′,t)) dm(z′) + I(z,t), t ∈ J,
  V(z,0) = V₀(z).
(30)

We assume that the connectivity function satisfies the conditions (C1)-(C2). Moreover we express z in (Euclidean) polar coordinates, z = re^{iθ}, and write V(z,t) = V(r,θ,t) and W(z,z′) = W(r,θ,r′,θ′). The integral in equation (30) then reads:

∫_{B(0,a)} W(z,z′) S(V(z′,t)) dm(z′) = ∫₀^a ∫₀^{2π} W(r,θ,r′,θ′) S(V(r′,θ′,t)) (r′ dr′ dθ′)/(1 − r′²)².

We define R to be the rectangle R = def [0,a]×[0,2π].

6.1.1 Discretization scheme

We discretize R in order to turn (30) into a finite number of equations. For this purpose we introduce h₁ = a/N and h₂ = 2π/M, with M, N ∈ ℕ*, and set:

∀i ∈ [[1, N+1]], r_i = (i−1)h₁;  ∀j ∈ [[1, M+1]], θ_j = (j−1)h₂,

and obtain the (N+1)(M+1) equations:

dV/dt(r_i,θ_j,t) = −αV(r_i,θ_j,t) + ∫_R W(r_i,θ_j,r′,θ′) S(V(r′,θ′,t)) (r′ dr′ dθ′)/(1 − r′²)² + I(r_i,θ_j,t),

which define the discretization of (30):

{ dṼ/dt(t) = −αṼ(t) + W·S(Ṽ)(t) + Ĩ(t), t ∈ J,
  Ṽ(0) = Ṽ₀,
(31)

where $\tilde{V}(t) \in \mathcal{M}_{N+1,M+1}(\mathbb{R})$ with $\tilde{V}(t)_{i,j} = V(r_i,\theta_j,t)$, and $\mathcal{M}_{n,p}(\mathbb{R})$ is the space of $n \times p$ matrices with real coefficients. Similar definitions apply to $\tilde{I}$ and $\tilde{V}_0$. Moreover:

\[
W \cdot S(\tilde{V})(t)_{i,j} = \int_{\mathcal{R}} W(r_i,\theta_j,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2}.
\]

It remains to discretize the integral term. To do so, as in [33], we use the rectangular rule for the quadrature, so that for all $(r,\theta) \in \mathcal{R}$ we have:

\[
\int_0^a\!\!\int_0^{2\pi} W(r,\theta,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2} \approx h_1 h_2 \sum_{k=1}^{N+1}\sum_{l=1}^{M+1} W(r,\theta,r_k,\theta_l)\,S(V(r_k,\theta_l,t))\,\frac{r_k}{(1-r_k^2)^2}.
\]

We end up with the following numerical scheme, where $V_{i,j}(t)$ (resp. $I_{i,j}(t)$) is an approximation of $\tilde{V}_{i,j}(t)$ (resp. $\tilde{I}_{i,j}(t)$), for $(i,j) \in [\![1,N+1]\!] \times [\![1,M+1]\!]$:

\[
\frac{dV_{i,j}}{dt}(t) = -\alpha V_{i,j}(t) + h_1 h_2 \sum_{k=1}^{N+1}\sum_{l=1}^{M+1} \tilde{W}^{i,j}_{k,l}\,S(V_{k,l}(t)) + I_{i,j}(t)
\]

with $\tilde{W}^{i,j}_{k,l} \stackrel{\text{def}}{=} W(r_i,\theta_j,r_k,\theta_l)\,\dfrac{r_k}{(1-r_k^2)^2}$.

6.1.2 Discussion

We now discuss the error induced by the rectangular rule. Let $f$ be a function of class $C^2$ on a rectangular domain $[a,b]\times[c,d]$. If we denote by $E_f$ this error, then

\[
|E_f| \leq \frac{(b-a)^2(d-c)^2}{4mn}\,\|f\|_{C^2},
\]

where $m$ and $n$ are the numbers of subintervals used in each direction and $\|f\|_{C^2} = \sum_{|\alpha| \leq 2} \sup_{[a,b]\times[c,d]} |\partial^\alpha f|$, $\alpha$ being, as usual, a multi-index. As a consequence, if we want to control the error, we have to require that the solution be at least $C^2$ in space.
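The effect of this bound can be observed directly: refining the grid shrinks the quadrature error of the rectangular rule. A minimal sketch in Python (the $C^2$ test function $e^{x+y}$ and the grid sizes are our own illustrative choices, not taken from the text):

```python
import numpy as np

def rect_rule_2d(f, a, b, c, d, m, n):
    """Rectangular (left-endpoint) product rule on [a,b] x [c,d]
    with m x n subintervals, as used for the integral term."""
    h1, h2 = (b - a) / m, (d - c) / n
    x = a + h1 * np.arange(m)   # left endpoints in x
    y = c + h2 * np.arange(n)   # left endpoints in y
    X, Y = np.meshgrid(x, y, indexing="ij")
    return h1 * h2 * np.sum(f(X, Y))

# smooth integrand with a known integral on [0,1]^2: int exp(x+y) = (e-1)^2
f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2

err_coarse = abs(rect_rule_2d(f, 0, 1, 0, 1, 16, 16) - exact)
err_fine = abs(rect_rule_2d(f, 0, 1, 0, 1, 64, 64) - exact)
# the error decreases as the grid is refined
```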

For our numerical experiments we use the function ode45 of Matlab, which is based on an explicit Runge-Kutta formula (see [34] for more details on Runge-Kutta methods).

A proof of the convergence of the numerical scheme can also be established; it is exactly the same as in [33], except that we use the theorem of continuous dependence of solutions of ordinary differential equations on their initial conditions.
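For illustration, here is a small-scale sketch of the whole scheme in Python, with SciPy's RK45 solver playing the role of Matlab's ode45. The grid sizes and all parameter values are chosen here for speed, and we assume the exponential connectivity $w(x) = e^{-|x|/b}$ of Section 6.2; this is a sketch, not the code that produced the figures:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, N, M = 0.5, 12, 16            # ball radius and grid sizes (small, for speed)
alpha, mu, b = 0.1, 10.0, 0.1    # decay, sigmoid slope, connectivity width

h1, h2 = a / N, 2 * np.pi / M
r = h1 * np.arange(N + 1)                  # r_i = (i-1) h1, i = 1..N+1
th = h2 * np.arange(M + 1)                 # theta_j = (j-1) h2
R, TH = np.meshgrid(r, th, indexing="ij")
Z = (R * np.exp(1j * TH)).ravel()          # grid points z = r e^{i theta}

def d2(z, zp):
    """Hyperbolic distance on the Poincare disc."""
    return np.arctanh(np.abs((z - zp) / (1 - np.conj(zp) * z)))

S = lambda v: 1.0 / (1.0 + np.exp(-mu * v))     # sigmoid of Section 6.2
w = lambda x: np.exp(-np.abs(x) / b)            # assumed exponential kernel

# kernel matrix W~ with the hyperbolic density r'/(1-r'^2)^2 and weights h1 h2
dens = (R / (1 - R**2) ** 2).ravel()
K = w(d2(Z[:, None], Z[None, :])) * dens[None, :] * h1 * h2

I = 0.1 * np.exp(-d2(Z, 0) ** 2 / (2 * 0.05**2))   # sharp Gaussian input at 0

rhs = lambda t, V: -alpha * V + K @ S(V) + I
sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(Z.size), method="RK45")
V_end = sol.y[:, -1]
```

Since the input and the kernel are non-negative, the solution starting from zero stays positive and remains bounded for these parameter values.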

6.2 Purely excitatory exponential connectivity function

In this subsection, we give some numerical solutions of (13) in the case where the connectivity function is the exponential function $w(x) = e^{-\frac{|x|}{b}}$, with $b$ a positive parameter (the width). Only excitation is present in this case. In all the experiments we set $\alpha = 0.1$ and $S(x) = \frac{1}{1+e^{-\mu x}}$ with $\mu = 10$.

6.2.1 Constant input

We fix the external input $I(z)$ to be of the form:

\[
I(z) = I\,e^{-\frac{d_2^2(z,0)}{2\sigma^2}}.
\]

In all experiments we set $I = 0.1$ and $\sigma = 0.05$; this means that the input has a sharp profile centered at 0.
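Concretely, the input profile can be evaluated with the hyperbolic distance to the origin, $d_2(z,0) = \operatorname{arctanh}|z|$; the helper names below are our own. Comparing the value at the center with the value at $|z| = 0.2$ confirms the sharpness of the profile:

```python
import numpy as np

I0, sigma = 0.1, 0.05

def d2_to_origin(z):
    """Hyperbolic distance from z to the origin of the Poincare disc."""
    return np.arctanh(np.abs(z))

def input_current(z):
    """Gaussian-in-hyperbolic-distance input I(z) centered at 0."""
    return I0 * np.exp(-d2_to_origin(z) ** 2 / (2 * sigma**2))

# sharp profile: the input has essentially vanished at |z| = 0.2
center, off = input_current(0.0), input_current(0.2)
```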

We show in Figure 2 plots of the solution at time T=2,500 for three different values of the width b of the exponential function. When b=1, the whole network is highly excited, whereas as b changes from 1 to 0.1 the amplitude of the solution decreases, and the area of high excitation becomes concentrated around the external input.

Fig. 2

Plots of the solution of equation (13) at T=2,500 for the values μ=10, α=0.1 and for decreasing values of the width b of the connectivity, see text.

6.2.2 Variable input

In this paragraph, we allow the external current to depend upon the time variable:

\[
I(z,t) = I\,e^{-\frac{d_2^2(z,z_0(t))}{2\sigma^2}},
\]

where $z_0(t) = r_0 e^{i\Omega_0 t}$. This is a bump rotating with angular velocity $\Omega_0$ along the circle of radius $r_0$ centered at the origin. In our numerical experiments we set $r_0 = 0.4$, $\Omega_0 = 0.01$, $I = 0.1$ and $\sigma = 0.05$. We plot in Figure 3 the solution at the different times $T = 100, 150, 200, 250$.

Fig. 3

Plots of the solution of equation (13) in the case of an exponential connectivity function with b=0.1 at different times with a time-dependent input, see text.

6.2.3 High gain limit

We consider the high-gain limit $\mu \to \infty$ of the sigmoid function and we propose to illustrate Section 5 with a numerical simulation. We set $\alpha = 1$, $\kappa = 0.04$, $\omega = 0.18$. We fix the input to be of the form:

\[
I(z) = I\,e^{-\frac{d_2^2(z,0)}{2\sigma^2}}
\]

with $I = 0.04$ and $\sigma = 0.05$. The condition (23) for the existence of a stationary pulse is then satisfied, see Figure 1. We plot a bump solution according to (23) in Figure 4.

Fig. 4

Plot of a bump solution of equation (22) for the values α=1, κ=0.04, ω=0.18 and for b=0.2 for the width of the connectivity, see text.

6.3 Excitatory and inhibitory connectivity function

We give some numerical solutions of (13) in the case where the connectivity function is a difference of Gaussians, which features an excitatory center and an inhibitory surround:

\[
w(x) = \frac{1}{\sqrt{2\pi\sigma_1^2}}\,e^{-\frac{x^2}{2\sigma_1^2}} - \frac{A}{\sqrt{2\pi\sigma_2^2}}\,e^{-\frac{x^2}{2\sigma_2^2}} \quad\text{with } \sigma_1 = 0.1,\ \sigma_2 = 0.2 \text{ and } A = 1.
\]

We illustrate the behaviour of the solutions as the slope $\mu$ of the sigmoid increases. We use the sigmoid $S(x) = \frac{1}{1+e^{-\mu x}} - \frac{1}{2}$, which vanishes at the origin, and we choose the external input equal to zero, $I(z,t) = 0$. In this case the constant function equal to 0 is a solution of (13).
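A direct transcription of this connectivity and of the centered sigmoid (assuming the normalized Gaussians $e^{-x^2/(2\sigma^2)}/\sqrt{2\pi\sigma^2}$, as in Appendix E; a sketch for checking signs, not the simulation code):

```python
import numpy as np

sigma1, sigma2, A = 0.1, 0.2, 1.0

def w(x):
    """Difference of Gaussians: excitatory center, inhibitory surround."""
    g1 = np.exp(-x**2 / (2 * sigma1**2)) / np.sqrt(2 * np.pi * sigma1**2)
    g2 = np.exp(-x**2 / (2 * sigma2**2)) / np.sqrt(2 * np.pi * sigma2**2)
    return g1 - A * g2

def S(x, mu=10.0):
    """Sigmoid shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-mu * x)) - 0.5

# w is positive at the center, negative in the surround, and S(0) = 0
```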

For small values of the slope $\mu$, the dynamics of the solution is trivial: every solution asymptotically converges to the null solution, as shown in the top left-hand corner of Figure 5 for $\mu = 1$. When $\mu$ is increased, the stability bound found in Section 4.5 is no longer satisfied and the null solution may lose its stability. In effect, this solution may bifurcate to other, more interesting solutions. We plot in Figure 5 some solutions at $T = 2{,}500$ for different values of $\mu$ ($\mu = 3, 5, 10, 20$ and $30$). We can see exotic patterns which feature some interesting symmetries. The formal study of these bifurcated solutions is left for future work.

Fig. 5

Plots of the solutions of equation (13) in the case where the connectivity function is the difference of two Gaussians at time T=2,500 for α=0.1 and for increasing values of the slope μ of the sigmoid, see text.

7 Conclusion

In this paper, we have studied the existence and uniqueness of a solution of the evolution equation for a smooth neural mass model called the structure tensor model. This model is an approach to the representation and processing of textures and edges in the visual area V1 which contains as a special case the well-known ring model of orientations (see [1, 2, 19]). We have also given a rigorous functional framework for the study and computation of the stationary solutions to this nonlinear integro-differential equation. This work sets the basis for further studies beyond the spatially periodic case studied in [15], where the hypothesis of spatial periodicity allows one to replace the unbounded (hyperbolic) domain by a compact one, hence making the functional analysis much simpler.

We have completed our study by constructing and analyzing spatially localised bumps in the high-gain limit of the sigmoid function. Networks with Heaviside nonlinearities are admittedly not very realistic from the neurobiological perspective, and they lead to difficult mathematical considerations. However, taking the high-gain limit is instructive since it allows the explicit construction of stationary solutions, which is impossible with sigmoidal nonlinearities. We have constructed what we called a hyperbolic radially symmetric stationary pulse and presented a linear stability analysis adapted from [31]. The study of stationary solutions is very important as it conveys information for models of V1 that is likely to be biologically relevant. Moreover, our study should be thought of as the analog, for the structure tensor model, of the analysis of the tuning curves of the ring model of orientations (see [1, 2, 19, 35]). However, these solutions may be destabilized by adding lateral spatial connections in a spatially organized network of structure tensor models; this remains an area of future investigation. As far as we know, only Bressloff and coworkers have looked at this problem (see [3, 4, 11–14]).

Finally, we illustrated our theoretical results with numerical simulations based on rigorously defined numerical schemes. We hope that our numerical experiments will lead to new and exciting investigations such as a thorough study of the bifurcations of the solutions of our equations with respect to such parameters as the slope of the sigmoid and the width of the connectivity function.

Appendix A: Isometries of D

We briefly describe the isometries of D, that is, the transformations that preserve the distance $d_2$. We refer to classical textbooks in hyperbolic geometry for details, for example [17]. The direct isometries (those preserving orientation) of D are the elements of the special unitary group, noted SU(1,1), of $2\times 2$ complex matrices with determinant equal to 1 that preserve a Hermitian form of signature (1,1). Given:

\[
\gamma = \begin{pmatrix} \alpha & \beta \\ \bar{\beta} & \bar{\alpha} \end{pmatrix} \quad\text{such that } |\alpha|^2 - |\beta|^2 = 1,
\]

an element of SU(1,1), the corresponding isometry $\gamma$ of D is defined by:

\[
\gamma \cdot z = \frac{\alpha z + \beta}{\bar{\beta} z + \bar{\alpha}}, \quad z \in \mathrm{D}.
\]
(32)

Orientation-reversing isometries of D are obtained by composing any transformation (32) with the reflection $\kappa : z \to \bar{z}$. The full symmetry group of the Poincaré disc is therefore:

\[
U(1,1) = SU(1,1) \cup \kappa \cdot SU(1,1).
\]

Let us now describe the different kinds of direct isometries acting in D. We first define the following one parameter subgroups of SU(1,1):

\[
\begin{cases}
K \stackrel{\text{def}}{=} \left\{ \operatorname{rot}_\phi = \begin{pmatrix} e^{i\frac{\phi}{2}} & 0 \\ 0 & e^{-i\frac{\phi}{2}} \end{pmatrix},\ \phi \in S^1 \right\},\\[2ex]
A \stackrel{\text{def}}{=} \left\{ a_r = \begin{pmatrix} \cosh r & \sinh r \\ \sinh r & \cosh r \end{pmatrix},\ r \in \mathbb{R} \right\},\\[2ex]
N \stackrel{\text{def}}{=} \left\{ n_s = \begin{pmatrix} 1+is & -is \\ is & 1-is \end{pmatrix},\ s \in \mathbb{R} \right\}.
\end{cases}
\]

Note that $\operatorname{rot}_\phi \cdot z = e^{i\phi} z$ and that $a_r \cdot O = \tanh r$, where $O$ is the center of the Poincaré disc, that is, the point represented by $z = 0$.

The group K is the orthogonal group O(2). Its orbits are concentric circles. Each point $z \in \mathrm{D}$ can be expressed in hyperbolic polar coordinates: $z = \operatorname{rot}_\phi a_r \cdot O = \tanh r\, e^{i\phi}$, with $r = d_2(z,0)$.

The orbits of A converge, as $r \to \pm\infty$, to the limit points $b_{\pm 1} = \pm 1$ of the unit circle $\partial\mathrm{D}$. They are circular arcs in D going through the points $b_1$ and $b_{-1}$.

The orbits of N are the circles inside D that are tangent to the unit circle at $b_1$. These circles are called horocycles with base point $b_1$, and N is called the horocyclic group. Each point $z \in \mathrm{D}$ can also be expressed in horocyclic coordinates: $z = n_s a_r \cdot O$, where the $n_s$ are the transformations of the group N ($s \in \mathbb{R}$) and the $a_r$ are those of the subgroup A ($r \in \mathbb{R}$).

A.1 Iwasawa decomposition

The following decomposition holds, see [36]:

SU(1,1)=KAN.

This theorem allows us to decompose any isometry of D as the product of at most three elements, one in each of the groups K, A and N.
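These matrices and the action (32) are easy to exercise numerically. The sketch below (helper names are our own) checks that a product of elements of K, A and N satisfies $|\alpha|^2 - |\beta|^2 = 1$ and acts as an isometry for $d_2(z,z') = \operatorname{arctanh}\frac{|z - z'|}{|1 - \bar{z}' z|}$:

```python
import numpy as np

def rot(phi):
    return np.array([[np.exp(1j*phi/2), 0], [0, np.exp(-1j*phi/2)]])

def a(r):
    return np.array([[np.cosh(r), np.sinh(r)], [np.sinh(r), np.cosh(r)]])

def n(s):
    return np.array([[1 + 1j*s, -1j*s], [1j*s, 1 - 1j*s]])

def act(g, z):
    """Mobius action (32) of g in SU(1,1) on z in the Poincare disc."""
    al, be = g[0, 0], g[0, 1]
    return (al*z + be) / (np.conj(be)*z + np.conj(al))

def d2(z, zp):
    """Hyperbolic distance on the Poincare disc."""
    return np.arctanh(abs((z - zp) / (1 - np.conj(zp)*z)))

g = rot(0.7) @ a(0.3) @ n(-1.2)        # a generic direct isometry
al, be = g[0, 0], g[0, 1]
z, zp = 0.2 + 0.1j, -0.3 + 0.4j
# |alpha|^2 - |beta|^2 = 1, d2 is preserved, and a_r . O = tanh r
```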

Appendix B: Volume element in structure tensor space

Let T be a structure tensor

\[
\mathrm{T} = \begin{pmatrix} x_1 & x_3 \\ x_3 & x_2 \end{pmatrix},
\]

with $\Delta^2$ its determinant, $\Delta > 0$. T can be written:

\[
\mathrm{T} = \Delta\,\tilde{\mathrm{T}},
\]

where $\tilde{\mathrm{T}}$ has determinant 1. Let $z = z_1 + i z_2$ be the complex number representation of $\tilde{\mathrm{T}}$ in the Poincaré disc D. In this part of the appendix, we derive a simple form for the volume element of the full structure tensor space when it is parametrized as $(\Delta, z)$.

Proposition B.0.1 The volume element in $(\Delta, z_1, z_2)$ coordinates is

\[
dV = 8\sqrt{2}\,\frac{d\Delta}{\Delta}\,\frac{dz_1\,dz_2}{(1-|z|^2)^2}.
\]
(33)

Proof In order to compute the volume element in $(\Delta, z_1, z_2)$ space, we need to express the metric $g_{\mathrm{T}}$ in these coordinates. It is obtained from the inner product on the tangent space $T_{\mathrm{T}}$ of SDP(2) at the point T. The tangent space is the set $S(2)$ of symmetric matrices, and the inner product is defined by:

\[
g_{\mathrm{T}}(A,B) = \operatorname{tr}\bigl(\mathrm{T}^{-1} A\, \mathrm{T}^{-1} B\bigr), \quad \forall A, B \in S(2).
\]

We note that $g_{\mathrm{T}}(A,B) = g_{\tilde{\mathrm{T}}}(A,B)/\Delta^2$, and we write $g$ instead of $g_{\tilde{\mathrm{T}}}$. A basis of $T_{\mathrm{T}}$ (or of $T_{\tilde{\mathrm{T}}}$, for that matter) is given by:

\[
\partial_{x_1} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad \partial_{x_2} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \partial_{x_3} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\]

and the metric is given by:

\[
g_{ij} = g_{\tilde{\mathrm{T}}}(\partial_{x_i}, \partial_{x_j}), \quad i,j = 1,2,3.
\]

The determinant $G_{\mathrm{T}}$ of $g_{\mathrm{T}}$ is equal to $G/\Delta^6$, where $G$ is the determinant of $g = g_{\tilde{\mathrm{T}}}$; $G$ is found to be equal to 2. The volume element is thus:

\[
dV = \frac{\sqrt{2}}{\Delta^3}\,dx_1\,dx_2\,dx_3.
\]

We then use the relations:

\[
x_1 = \Delta \tilde{x}_1, \qquad x_2 = \Delta \tilde{x}_2, \qquad x_3 = \Delta \tilde{x}_3,
\]

where $\tilde{x}_i$, $i = 1,2,3$, is given by:

\[
\begin{cases}
\tilde{x}_1 = \dfrac{(1+z_1)^2 + z_2^2}{1 - z_1^2 - z_2^2},\\[2ex]
\tilde{x}_2 = \dfrac{(1-z_1)^2 + z_2^2}{1 - z_1^2 - z_2^2},\\[2ex]
\tilde{x}_3 = \dfrac{2 z_2}{1 - z_1^2 - z_2^2}.
\end{cases}
\]

The absolute value of the determinant of the Jacobian of the transformation $(\Delta, z_1, z_2) \to (x_1, x_2, x_3)$ is found to be equal to:

\[
\frac{8\Delta^2}{(1-|z|^2)^2}.
\]

Hence, the volume element in $(\Delta, z_1, z_2)$ coordinates is

\[
dV = 8\sqrt{2}\,\frac{d\Delta}{\Delta}\,\frac{dz_1\,dz_2}{(1-|z|^2)^2}.
\]

 □
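The determinant appearing in the proof can be cross-checked by finite differences: the Jacobian of $(\Delta, z_1, z_2) \mapsto (x_1, x_2, x_3) = \Delta(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3)$ should have absolute determinant $8\Delta^2/(1-|z|^2)^2$. A sketch at an arbitrary sample point of our own choosing:

```python
import numpy as np

def x_of(delta, z1, z2):
    """Coordinates (x1, x2, x3) = Delta * (x1~, x2~, x3~)."""
    den = 1 - z1**2 - z2**2
    return delta * np.array([((1 + z1)**2 + z2**2) / den,
                             ((1 - z1)**2 + z2**2) / den,
                             2 * z2 / den])

def jac_det(delta, z1, z2, h=1e-6):
    """Central-difference Jacobian determinant of (Delta,z1,z2) -> (x1,x2,x3)."""
    p = np.array([delta, z1, z2], dtype=float)
    J = np.empty((3, 3))
    for k in range(3):
        dp = np.zeros(3); dp[k] = h
        J[:, k] = (x_of(*(p + dp)) - x_of(*(p - dp))) / (2 * h)
    return np.linalg.det(J)

delta, z1, z2 = 1.3, 0.2, 0.3
expected = 8 * delta**2 / (1 - z1**2 - z2**2) ** 2
# |det J| matches 8 Delta^2 / (1 - |z|^2)^2 up to finite-difference error
```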

Appendix C: Global existence of solutions

Theorem C.0.1 Let $O$ be an open connected set of a real Banach space $F$ and let $J$ be an open interval of $\mathbb{R}$. We consider the initial value problem:

\[
\begin{cases}
V'(t) = f(t, V(t)),\\
V(t_0) = V_0.
\end{cases}
\]
(34)

We suppose that $f \in C(J \times O, F)$ is locally Lipschitz with respect to its second argument. Then for all $(t_0, V_0) \in J \times O$, there exist $\tau > 0$ and a unique solution $V \in C^1(]t_0 - \tau, t_0 + \tau[, O)$ of (34).

Lemma C.0.1 Under the hypotheses of Theorem C.0.1, if $V_1 \in C^1(J_1, O)$ and $V_2 \in C^1(J_2, O)$ are two solutions and if there exists $t_0 \in J_1 \cap J_2$ such that $V_1(t_0) = V_2(t_0)$, then:

\[
V_1(t) = V_2(t) \quad \text{for all } t \in J_1 \cap J_2.
\]

This lemma shows the existence of a larger interval J 0 on which the initial value problem (34) has a unique solution. This solution is called the maximal solution.

Theorem C.0.2 Under the hypotheses of Theorem C.0.1, let $V \in C^1(J_0, O)$ be a maximal solution. We denote by $b$ the upper bound of $J$ and by $\beta$ the upper bound of $J_0$. Then either $\beta = b$, or for every compact set $K \subset O$ there exists $\eta < \beta$ such that:

\[
V(t) \in O \setminus K, \quad \text{for all } t \geq \eta \text{ with } t \in J_0.
\]

We have the same result with the lower bounds.

Theorem C.0.3 We suppose that $f \in C(J \times F, F)$ is globally Lipschitz with respect to its second argument. Then for all $(t_0, V_0) \in J \times F$, there exists a unique solution $V \in C^1(J, F)$ of (34).

Appendix D: Proof of Lemma 3.1.1

Lemma D.0.1 When $W$ is only a function of $d_0(\mathrm{T}, \mathrm{T}')$, then $\overline{W}$ does not depend upon the variable T.

Proof We work in $(z, \Delta)$ coordinates and we begin by rewriting the double integral (6) for all $(z, \Delta) \in \mathrm{D} \times \mathbb{R}^{*}_{+}$:

\[
\overline{W}(z,\Delta,t) = \int_0^{+\infty}\!\!\int_{\mathrm{D}} W\Bigl(\sqrt{2(\log\Delta - \log\Delta')^2 + d_2^2(z,z')},\,t\Bigr)\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}.
\]

The change of variable $\Delta' \to \Delta\Delta'$ yields:

\[
\overline{W}(z,\Delta,t) = \int_0^{+\infty}\!\!\int_{\mathrm{D}} W\Bigl(\sqrt{2(\log\Delta')^2 + d_2^2(z,z')},\,t\Bigr)\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2},
\]

which establishes that $\overline{W}$ does not depend upon the variable $\Delta$. To finish the proof, we show that the following integral does not depend upon the variable $z \in \mathrm{D}$:

\[
\Xi(z) = \int_{\mathrm{D}} f\bigl(d_2(z,z')\bigr)\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2},
\]
(35)

where f is a real-valued function such that Ξ(z) is well defined.

We express $z$ and $z'$ in horocyclic coordinates: $z = n_s a_r \cdot O$ and $z' = n_{s'} a_{r'} \cdot O$ (see Appendix A), with $dm(z') = e^{-2r'}\,ds'\,dr'$, and (35) becomes:

\[
\Xi(z) = \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(n_s a_r \cdot O,\, n_{s'} a_{r'} \cdot O)\bigr)\,e^{-2r'}\,ds'\,dr' = \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(n_{s-s'} a_r \cdot O,\, a_{r'} \cdot O)\bigr)\,e^{-2r'}\,ds'\,dr'.
\]

With the change of variable $s - s' = x e^{2r}$, this becomes:

\[
\Xi(z) = \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(n_{x e^{2r}} a_r \cdot O,\, a_{r'} \cdot O)\bigr)\,e^{-2(r'-r)}\,dx\,dr'.
\]

The relation $n_{x e^{2r}}\, a_r \cdot O = a_r\, n_x \cdot O$ (proved, for example, in [22]) yields:

\[
\begin{aligned}
\Xi(z) &= \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(a_r n_x \cdot O,\, a_{r'} \cdot O)\bigr)\,e^{-2(r'-r)}\,dx\,dr'\\
&= \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(O,\, n_{-x} a_{r'-r} \cdot O)\bigr)\,e^{-2(r'-r)}\,dx\,dr'\\
&= \int_{\mathbb{R}}\int_{\mathbb{R}} f\bigl(d_2(O,\, n_x a_y \cdot O)\bigr)\,e^{-2y}\,dx\,dy \quad (x \to -x,\ y = r'-r)\\
&= \int_{\mathrm{D}} f\bigl(d_2(O, z')\bigr)\,dm(z')
\end{aligned}
\]

with $z' = z_1' + i z_2'$ and $dm(z') = \frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}$, which shows that $\Xi(z)$ does not depend upon the variable $z$, as announced. □
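The relation $n_{x e^{2r}}\, a_r \cdot O = a_r\, n_x \cdot O$ used in this proof can be checked directly on the matrices of Appendix A; in fact the stronger matrix identity $n_{x e^{2r}}\, a_r = a_r\, n_x$ holds. A numerical sketch (we assume the sign convention for $n_s$ under which the relation holds, the displayed matrix in Appendix A having lost its signs in extraction):

```python
import numpy as np

def a(r):
    return np.array([[np.cosh(r), np.sinh(r)], [np.sinh(r), np.cosh(r)]])

def n(s):
    return np.array([[1 + 1j*s, -1j*s], [1j*s, 1 - 1j*s]])

def act(g, z):
    """Mobius action of g in SU(1,1) on z in the Poincare disc."""
    return (g[0, 0]*z + g[0, 1]) / (g[1, 0]*z + g[1, 1])

x, r = 0.7, 0.4
lhs = act(n(x * np.exp(2*r)) @ a(r), 0.0)   # n_{x e^{2r}} a_r . O
rhs = act(a(r) @ n(x), 0.0)                 # a_r n_x . O
# the two images of O agree, and so do the matrices themselves
```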

Appendix E: Proof of Lemma 3.1.2

In this section we prove the following lemma.

Lemma E.0.1 Let $W$ be the following Mexican hat function:

\[
W(z, \Delta, z', \Delta') = w\Bigl(\sqrt{2(\log\Delta - \log\Delta')^2 + d_2^2(z,z')}\Bigr),
\]

where:

\[
w(x) = \frac{1}{\sqrt{2\pi\sigma_1^2}}\,e^{-\frac{x^2}{2\sigma_1^2}} - \frac{A}{\sqrt{2\pi\sigma_2^2}}\,e^{-\frac{x^2}{2\sigma_2^2}}
\]

with $0 \leq \sigma_1 \leq \sigma_2$ and $0 \leq A \leq 1$. Then:

\[
\overline{W} = \frac{\pi^{\frac{3}{2}}}{2}\Bigl(\sigma_1 e^{2\sigma_1^2}\operatorname{erf}(\sqrt{2}\,\sigma_1) - A\,\sigma_2 e^{2\sigma_2^2}\operatorname{erf}(\sqrt{2}\,\sigma_2)\Bigr),
\]

where erf is the error function defined as:

\[
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du.
\]

Proof We consider the following double integrals:

\[
\Xi_i = \int_0^{+\infty}\!\!\int_{\mathrm{D}} \frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta - \log\Delta')^2}{\sigma_i^2}}\,e^{-\frac{d_2^2(z,z')}{2\sigma_i^2}}\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}, \quad i = 1,2,
\]
(36)

so that:

\[
\overline{W} = \Xi_1 - A\,\Xi_2.
\]

Since the variables are separable, we have:

\[
\Xi_i = \left(\int_0^{+\infty} \frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta - \log\Delta')^2}{\sigma_i^2}}\,\frac{d\Delta'}{\Delta'}\right) \left(\int_{\mathrm{D}} e^{-\frac{d_2^2(z,z')}{2\sigma_i^2}}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}\right).
\]

One can easily see that:

\[
\int_0^{+\infty} \frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta - \log\Delta')^2}{\sigma_i^2}}\,\frac{d\Delta'}{\Delta'} = \frac{1}{\sqrt{2}}.
\]

We now derive a simplified expression for $\Xi_i$. Setting $f_i(x) = e^{-x^2/(2\sigma_i^2)}$, we have, because of Lemma 3.1.1:

\[
\begin{aligned}
\Xi_i &= \frac{1}{\sqrt{2}} \int_{\mathrm{D}} f_i\bigl(d_2(O,z')\bigr)\,dm(z') = \frac{1}{\sqrt{2}} \int_{\mathrm{D}} f_i\bigl(\operatorname{arctanh}(|z'|)\bigr)\,dm(z')\\
&= \frac{1}{\sqrt{2}} \int_0^1\!\!\int_0^{2\pi} f_i\bigl(\operatorname{arctanh}(r)\bigr)\,\frac{r\,dr\,d\theta}{(1-r^2)^2} = \sqrt{2}\,\pi \int_0^1 f_i\bigl(\operatorname{arctanh}(r)\bigr)\,\frac{r\,dr}{(1-r^2)^2}\\
&= \sqrt{2}\,\pi \int_0^1 e^{-\frac{\operatorname{arctanh}^2(r)}{2\sigma_i^2}}\,\frac{r\,dr}{(1-r^2)^2}.
\end{aligned}
\]

The change of variable $x = \operatorname{arctanh}(r)$, for which $dx = \frac{dr}{1-r^2}$, yields:

\[
\begin{aligned}
\Xi_i &= \sqrt{2}\,\pi \int_0^{+\infty} e^{-\frac{x^2}{2\sigma_i^2}}\,\frac{\tanh(x)}{1-\tanh^2(x)}\,dx = \sqrt{2}\,\pi \int_0^{+\infty} e^{-\frac{x^2}{2\sigma_i^2}} \sinh(x)\cosh(x)\,dx\\
&= \frac{\pi}{\sqrt{2}} \int_0^{+\infty} e^{-\frac{x^2}{2\sigma_i^2}} \sinh(2x)\,dx\\
&= \frac{\pi}{2\sqrt{2}} \left(\int_0^{+\infty} e^{-\frac{x^2}{2\sigma_i^2} + 2x}\,dx - \int_0^{+\infty} e^{-\frac{x^2}{2\sigma_i^2} - 2x}\,dx\right)\\
&= \frac{\pi}{2\sqrt{2}}\,e^{2\sigma_i^2} \left(\int_0^{+\infty} e^{-\frac{(x - 2\sigma_i^2)^2}{2\sigma_i^2}}\,dx - \int_0^{+\infty} e^{-\frac{(x + 2\sigma_i^2)^2}{2\sigma_i^2}}\,dx\right)\\
&= \frac{\pi}{2}\,\sigma_i\,e^{2\sigma_i^2} \left(\int_{-\sqrt{2}\sigma_i}^{+\infty} e^{-u^2}\,du - \int_{\sqrt{2}\sigma_i}^{+\infty} e^{-u^2}\,du\right) = \frac{\pi}{2}\,\sigma_i\,e^{2\sigma_i^2} \int_{-\sqrt{2}\sigma_i}^{\sqrt{2}\sigma_i} e^{-u^2}\,du,
\end{aligned}
\]

so that we obtain the simplified expression:

\[
\Xi_i = \frac{\pi^{\frac{3}{2}}}{2}\,\sigma_i\,e^{2\sigma_i^2}\,\operatorname{erf}(\sqrt{2}\,\sigma_i).
\]

 □

Appendix F: Proof of Lemma 5.2.2

Lemma F.0.1 For all $\omega > 0$ the following formula holds:

\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\,dm(z) = \pi\,\sinh^2(\omega)\cosh^2(\omega)\,\Phi_\lambda^{(1,1)}(\omega).
\]

Proof We write $z$ in hyperbolic polar coordinates, $z = \tanh(r)\,e^{i\theta}$ (see Appendix A). We have:

\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\,dm(z) = \frac{1}{2}\int_0^{\omega}\!\!\int_0^{2\pi} \Phi_\lambda\bigl(\tanh(r)\,e^{i\theta}\bigr)\,\sinh(2r)\,dr\,d\theta.
\]

Because of the above definition of $\Phi_\lambda$, this reduces to:

\[
\pi \int_0^{\omega} \Phi_\lambda\bigl(\tanh(r)\bigr)\,\sinh(2r)\,dr.
\]

In [22], Helgason proved that:

\[
\Phi_\lambda\bigl(\tanh(r)\bigr) = F\bigl(\nu, 1-\nu; 1; -\sinh^2(r)\bigr)
\]

with $\nu = \frac{1}{2}(1 + i\lambda)$. We then use the formula obtained by Erdelyi in [32]:

\[
F(\nu, 1-\nu; 1; z) = \frac{d}{dz}\bigl(z\,F(\nu, 1-\nu; 2; z)\bigr).
\]

Using some simple hyperbolic trigonometry formulae we obtain:

\[
\sinh(2r)\,F\bigl(\nu, 1-\nu; 1; -\sinh^2(r)\bigr) = \frac{d}{dr}\Bigl(\sinh^2(r)\,F\bigl(\nu, 1-\nu; 2; -\sinh^2(r)\bigr)\Bigr),
\]

from which we deduce:

\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\,dm(z) = \pi\,\sinh^2(\omega)\,F\bigl(\nu, 1-\nu; 2; -\sinh^2(\omega)\bigr).
\]

Finally we use the equality shown in [32]:

\[
F(a, b; c; z) = (1-z)^{c-a-b}\,F(c-a, c-b; c; z).
\]

In our case we have $a = \nu$, $b = 1-\nu$, $c = 2$ and $z = -\sinh^2(\omega)$, so that $c - a = 2 - \nu = \frac{1}{2}(3 - i\lambda)$ and $c - b = 1 + \nu = \frac{1}{2}(3 + i\lambda)$. We obtain:

\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\,dm(z) = \pi\,\sinh^2(\omega)\cosh^2(\omega)\,F\Bigl(\tfrac{1}{2}(3 - i\lambda), \tfrac{1}{2}(3 + i\lambda); 2; -\sinh^2(\omega)\Bigr).
\]

Since hypergeometric functions are symmetric with respect to their first two arguments:

\[
F(a, b; c; z) = F(b, a; c; z),
\]

we write:

\[
F\Bigl(\tfrac{1}{2}(3 - i\lambda), \tfrac{1}{2}(3 + i\lambda); 2; -\sinh^2(\omega)\Bigr) = F\Bigl(\tfrac{1}{2}(3 + i\lambda), \tfrac{1}{2}(3 - i\lambda); 2; -\sinh^2(\omega)\Bigr) = \Phi_\lambda^{(1,1)}(\omega),
\]

which yields the announced formula:

\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\,dm(z) = \pi\,\sinh^2(\omega)\cosh^2(\omega)\,\Phi_\lambda^{(1,1)}(\omega).
\]

 □
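For real $\lambda \neq 0$ the parameters of $F$ are complex, but at $\lambda = 0$ (that is, $\nu = \frac{1}{2}$) everything is real and the formula can be verified with SciPy's Gauss hypergeometric function ($\omega$ is an arbitrary test radius; the argument $-\sinh^2(r)$ is the one required by Euler's transformation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

omega = 0.8   # test radius; nu = 1/2 corresponds to lambda = 0

# left-hand side: pi * int_0^omega F(1/2, 1/2; 1; -sinh^2 r) sinh(2r) dr
lhs = np.pi * quad(lambda r: hyp2f1(0.5, 0.5, 1.0, -np.sinh(r)**2)
                   * np.sinh(2*r), 0, omega)[0]

# right-hand side: pi sinh^2(w) cosh^2(w) F(3/2, 3/2; 2; -sinh^2 w)
rhs = (np.pi * np.sinh(omega)**2 * np.cosh(omega)**2
       * hyp2f1(1.5, 1.5, 2.0, -np.sinh(omega)**2))
```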

References

  1. Ben-Yishai R, Bar-Or R, Sompolinsky H: Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 1995,92(9):3844–3848.


  2. Hansel, D., Sompolinsky, H.: Modeling feature selectivity in local cortical circuits. In: Methods of Neuronal Modeling, pp. 499–567 (1997)

  3. Bressloff P, Bressloff N, Cowan J: Dynamical mechanism for sharp orientation tuning in an integrate-and-fire model of a cortical hypercolumn. Neural Comput. 2000,12(11):2473–2511.


  4. Bressloff, P.C., Cowan, J.D.: A spherical model for orientation and spatial frequency tuning in a cortical hypercolumn. Philos. Trans. R. Soc. Lond. B, Biol. Sci. (2003)

  5. Orban G, Kennedy H, Bullier J: Velocity sensitivity and direction selectivity of neurons in areas V1 and V2 of the monkey: influence of eccentricity. J. Neurophysiol. 1986, 56: 462–480.


  6. Hubel D, Wiesel T: Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 1968,195(1):215.


  7. Chossat P, Faugeras O: Hyperbolic planforms in relation to visual edges and textures perception. PLoS Comput. Biol. 2009, 5: e1000625.


  8. Bigun, J., Granlund, G.: Optimal orientation detection of linear symmetry. In: Proc. First Int’l Conf. Comput. Vision, pp. 433–438. IEEE Computer Society Press (1987)

  9. Knutsson, H.: Representing local structure using tensors. In: Scandinavian Conference on Image Analysis, pp. 244–251 (1989)

  10. Wilson H, Cowan J: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 1972, 12: 1–24.


  11. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M: Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos. Trans. R. Soc. Lond. B, Biol. Sci. 2001, 306: 299–330.


  12. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M: What geometric visual hallucinations tell us about the visual cortex. Neural Comput. 2002,14(3):473–491.


  13. Bressloff P, Cowan J: The visual cortex as a crystal. Physica D 2002, 173: 226–258.


  14. Bressloff, P., Cowan, J.: SO(3) symmetry breaking mechanism for orientation and spatial frequency tuning in the visual cortex. Phys. Rev. Lett. 88 (2002)

  15. Chossat, P., Faye, G., Faugeras, O.: Bifurcation of hyperbolic planforms. J. Nonlinear Sci. (2010)

  16. Bonnabel S, Sepulchre R: Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM J. Matrix Anal. Appl. 2009, 31: 1055–1070.


  17. Katok, S.: Fuchsian Groups. Chicago Lectures in Mathematics. The University of Chicago Press (1992)

  18. Moakher M: A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. 2005, 26: 735–747.


  19. Shriki O, Hansel D, Sompolinsky H: Rate models for conductance-based cortical neuronal networks. Neural Comput. 2003,15(8):1809–1841.


  20. Amari S-I: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27: 77–87.


  21. Potthast R, Graben PB: Existence and properties of solutions for neural field equations. Math. Methods Appl. Sci. 2010, 33: 935–949.


  22. Helgason, S.: Groups and Geometric Analysis. Mathematical Surveys and Monographs, vol. 83. American Mathematical Society (2000)

  23. Faugeras O, Veltz R, Grimbert F: Persistent neural states: stationary localized activity patterns in nonlinear continuous n -population, q -dimensional neural networks. Neural Comput. 2009,21(1):147–187.


  24. Camperi M, Wang X: A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. J. Comput. Neurosci. 1998, 5: 383–405.


  25. Pinto D, Ermentrout G: Spatially structured activity in synaptically coupled neuronal networks: 1. traveling fronts and pulses. SIAM J. Appl. Math. 2001, 62: 206–225.


  26. Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math. 2002,63(1):62–97.


  27. Laing CR, Troy WC: PDE methods for nonlocal models. SIAM J. Appl. Dyn. Syst. 2003,2(3):487–516.


  28. Laing C, Troy W: Two-bump solutions of Amari-type models of neuronal pattern formation. Physica D 2003, 178: 190–218.


  29. Owen M, Laing C, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J. Phys. 2007,9(10):378–401.


  30. Coombes S: Waves, bumps, and patterns in neural fields theories. Biol. Cybern. 2005,93(2):91–108.


  31. Folias SE, Bressloff PC: Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 2004,3(3):378–407.


  32. Erdelyi, A. (ed.): Higher Transcendental Functions, vol. 1. Robert E. Krieger Publishing Company (1985)

  33. Faye, G., Faugeras, O.: Some theoretical and numerical results for delayed neural field equations. Physica D (2010). Special issue on Mathematical Neuroscience

  34. Bellen, A., Zennaro, M.: Numerical Methods for Delay Differential Equations. Oxford University Press (2003)

  35. Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J. Appl. Dyn. Syst. 2010, 9: 954–998.


  36. Iwaniec, H.: Spectral Methods of Automorphic Forms. AMS Graduate Series in Mathematics, vol. 53. AMS Bookstore (2002)


Acknowledgements

This work was partially funded by the ERC advanced grant NerVi number 227747.

Author information


Corresponding author

Correspondence to Grégory Faye.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Faye, G., Chossat, P. & Faugeras, O. Analysis of a hyperbolic geometric model for visual texture perception. J. Math. Neurosc. 1, 4 (2011). https://doi.org/10.1186/2190-8567-1-4
