
On the potential role of lateral connectivity in retinal anticipation


We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control (Berry et al. in Nature 398(6725):334–338, 1999; Chen et al. in J. Neurosci. 33(1):120–132, 2013). We illustrate these predictions with two examples studied in the experimental literature: differential motion sensitive cells (Baccus and Meister in Neuron 36(5):909–919, 2002) and direction sensitive cells whose direction sensitivity is inherited from an asymmetry in gap junction connectivity (Trenholm et al. in Nat. Neurosci. 16:154–156, 2013). We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.


Our visual system has to constantly handle moving objects. Static images do not exist for it, as the environment, our body, our head and our eyes are constantly moving. A contemporary, "computational" view likens the retina to an "encoder", converting the light photons coming from a visual scene into spike trains sent, via the axons of ganglion cells (GCells) that constitute the optic nerve, to the thalamus and then to the visual cortex, which acts as a "decoder". In this view, comparing the size and number of neurons in the retina, i.e. about 1 million GCells (humans), to the size, structure and number of neurons in the visual cortex (around 538 million per hemisphere in the human visual cortex [22]), the "encoder" has to be quite smart to efficiently compress the visual information coming from a world made of moving objects. Although it has long been thought that the retina was no more than a simple camera, there is growing evidence that this organ is "smarter than neuroscientists believed" [41]. It is indeed able to perform complex tasks and general motion feature extraction, such as detecting approaching motion and differential motion, and anticipating motion, allowing the visual cortex to process visual stimuli more efficiently.

Computations occurring in neuronal networks downstream of photoreceptors are crucial to make sense of the "motion-agnostic" signal delivered by these retinal sensors [11]. There exists a wide range of theoretical and biological approaches to studying retinal processing of motion. Moving stimuli are generally considered as a spatiotemporal pattern of light intensity projected on the retina, from which it extracts relevant information, such as the direction of image motion. Detecting motion then requires neural networks able to process moving stimuli in a nonlinear fashion, asymmetrically in time [6, 10, 37, 88]. The first motion detector model was proposed by Hassenstein and Reichardt [42], based on the optomotor behaviour of insects. The model relies on changes in contrast at two spatially distinct locations inside the receptive field of a motion sensitive neuron. The neuron produces a response only if the contrast changes are temporally delayed; it is thus selective not only to direction, but also to motion velocity. Several models have since been developed based on Reichardt detectors, attempting to explain motion processing in vertebrate species as well, but they all share the common feature of integrating spatio-temporal variations of the contrast. To perceive motion as coherent and uninterrupted, an additional integration over motion detectors is hypothesised to take place. This integration usually takes the form of a pooling mechanism over visual space and time.

However, before these high-level motion detection computations can be performed, the photons received by the retina first need to be converted into electrical signals that will be transmitted to the visual cortex, a process known as phototransduction. This process takes about 30–100 milliseconds. Though this might look fast, it is actually too slow. A tennis ball moving at 30 m/s (108 km/h; the maximum measured speed is about 250 km/h) covers between 0.9 and 3 m during this time. So, without a mechanism compensating for this delay, it would not be possible to play tennis (not to speak of survival, a necessary condition for a species to reach the level where playing tennis becomes possible). The visual system is indeed able to extrapolate the trajectory of a moving object so as to perceive it at its actual location. This corresponds to anticipation mechanisms taking place in the visual cortex and in the retina, with different modalities [4, 56, 57, 84].

In the early visual cortex, an object moving across the visual field triggers a wave of activity ahead of the motion thanks to cortical lateral connectivity [8, 46, 77]. Jancke et al. [46] first demonstrated the existence of anticipatory mechanisms in the cat primary visual cortex. They recorded cells in the central visual field of area 17 (corresponding to the primary visual cortex) of anaesthetised cats responding to small squares of light either flashed or moving in different directions and at different speeds. When presented with the moving stimulus, cells show a reduction of neural latencies compared to the flashed stimulus. Subramaniyan et al. [77] reported similar anticipatory effects in the macaque primary visual cortex, showing that a moving bar is processed faster than a flashed one. They give two possible explanations for this phenomenon: either a shift in the cells' receptive fields induced by motion, or a faster propagation of motion signals compared to the flash signal.

In the retina, anticipation takes a different form. One observes a peak in the firing rate response of GCells to a moving object occurring before the peak response to the same object when flashed. This effect can be explained by purely local mechanisms at the level of individual cells [9, 20]. To the best of our knowledge, collective effects similar to the cortical ones, that is, a rise in the cell's activity before the object enters its receptive field due to a wave of activity ahead of the moving object, have not been reported yet.

In a classical Hubel–Wiesel–Barlow [5, 44, 60] view of vision, each retinal ganglion cell carries a flow of information with an efficient coding strategy maximising the available channel capacity by minimising the redundancy between GCells. From this point of view, the most efficient coding is provided when GCells are independent encoders (parallel streaming, labelled "I" in Fig. 1). In this setting one can propose a simple and satisfactory mechanism explaining anticipation in the retina based on gain control at the level of bipolar cells (BCells) and GCells (label "II" in Fig. 1) [9, 20].

Figure 1

Synthetic view of the retina model. A stimulus is perceived by the retina, triggering different pathways. Pathway I (blue) corresponds to a feed-forward response where, from top to bottom: the stimulus is first convolved with a spatio-temporal receptive field that mimics the outer plexiform layer (OPL) ("Bipolar receptive field response"). This response is rectified by a low-voltage threshold (blue squares). Bipolar cell responses are then pooled (blue circles with blue arrows) and fed to ganglion cells. The firing rate response of a ganglion cell is a sigmoidal function of its voltage (blue square). Gain control can be applied at the bipolar and ganglion cell level (pink circles), triggering anticipation. This corresponds to label II (pink) in the figure. Lateral connectivity is featured by pathway III (brown), through ACells, and pathway IV (green), through gap junctions at the level of GCells

Yet, some GCells are connected: either directly, by electrical synapses (gap junctions; pathway IV in Fig. 1), or indirectly, via specific amacrine cells (ACells; pathway III in Fig. 1). It is known that these pathways are involved in motion processing by the retina. AII ACells play a fundamental role in the interaction between the ON and OFF cone pathways [54]. Some GCells are able to detect the differential motion of an object on a moving background [1] thanks to ACell lateral connectivity. Other GCells are direction sensitive because they are connected via a specific asymmetric gap junction connectivity [80]. Could lateral connectivity play a role in motion anticipation, inducing a wave of activity ahead of the motion similar to the cortical anticipation mechanism? While some studies hypothesise that local gain control mechanisms can be explained by the prevalence of inhibition in the retinal connectome [47], the mechanistic aspects of the role of lateral connectivity in motion anticipation have not, to the best of our knowledge, been addressed yet on either experimental or computational grounds.

In this paper, we address this question from a modeller's, computational neuroscientist's, point of view. We propose a simplified description of pathways I, II, III and IV of Fig. 1, grounded in biology but not strictly bound to it, to numerically study the potential effects of gain control combined with lateral connectivity (gap junctions or ACells) on motion anticipation. The goal here is not to be biologically realistic but, instead, to propose from biological observations potential mechanisms enhancing the retina's capacity to anticipate motion and to compensate for the delay introduced by phototransduction and feed-forward processing in the cortical response. We want these mechanisms to be as generic as possible, so that the detailed biological implementation is not essential. This has the advantage of making the model more amenable to mathematical analysis.

The first contribution of our work lies in the development of a model of retinal anticipation where GCells have gain control, orientation selectivity and are laterally connected. It is based on a model introduced by Chen et al. in [20] (itself based on [9]) reproducing several motion processing features: anticipation, alert response to motion onset and motion reversal. The original model handles one-dimensional motion and its cells are not laterally connected (only pathways I and II were considered). The extension proposed here features cells with oriented receptive fields, although our numerical simulations do not consider this case (see discussion). Lateral connectivity is based on biophysical modelling and the existing literature [1, 28, 43, 78, 80]. In this framework, we study different types of motion. We start with a bar moving at constant speed and study the effect of contrast, bar size and speed on anticipation, generalising previous studies by Berry et al. [9] and Chen et al. [20]. We then extend the analysis to two-dimensional motion, investigating, e.g. angular motion and curved trajectories. Far from making an exhaustive study of anticipation for complex stimuli, the goal here is to calibrate anticipation without lateral connectivity so as to compare the effect when connectivity is switched on.

The second contribution emphasises a potential role of lateral connectivity (gap junctions and ACells) in anticipation. For this, we first carry out a general mathematical analysis concluding that lateral connectivity can induce a wave, triggered by the stimulus, which under specific conditions can improve anticipation. The effect depends on the connectivity graph and is nonlinearly tuned by gain control. In the case of gap junctions, the wave propagation depends on whether connectivity is symmetric (the standard case) or asymmetric, as proposed by Trenholm et al. in [80] for a specific type of direction sensitive GCells. In the case of ACells, the connectivity graph enters the spectrum of a propagation operator controlling the time evolution of the network response to a moving stimulus. We instantiate this general analysis by studying differential motion sensitive cells [1] with two types of connectivity: nearest neighbours, and a random connectivity inspired by biology [78], for which only numerical results are shown. In general, the anticipation effect depends on the structure of the connectivity graph, on the intensity of the coupling between cells and on the respective characteristic response times of the cells, in a way that we analyse mathematically and illustrate numerically.

We actually observe two forms of anticipation. The first one, discussed at the beginning of this introduction and already observed in [9, 20], is a shift in the peak of a retinal GCell response, occurring before the object reaches the centre of its receptive field. In our case, lateral connectivity can enhance this shift beyond the mere effect of gain control. The second anticipation effect we observe is a rise in GCell activity before the bar reaches the receptive field of the cell, similar to what is observed in the cortex [8]. To the best of our knowledge, this effect has not been studied in the retina and therefore constitutes a prediction of our model.

The paper is organised as follows. Section 2 introduces the model of retinal organisation and cell type dynamics, ending up with a system of nonlinear differential equations driven by a time-dependent stimulus. Section 3 is divided into four parts. The first part analyses mathematically the potential anticipation effects in a general setting, before considering the role of ACells and lateral inhibition on anticipation (Sect. 3.2) and of gap junctions (Sect. 3.3). Both sections contain general mathematical results as well as numerical simulations for one-dimensional motion. The fourth part investigates examples of two-dimensional motion. The last section is devoted to discussion and conclusion. In Appendix A, we give the values of the parameters used in simulations, and in Appendix B the mathematical form of the receptive fields used in the paper as well as the numerical method to efficiently compute the response of oriented two-dimensional receptive fields to spatio-temporal stimuli. Appendix C presents a model of random connectivity from amacrine to bipolar cells inspired by biological data [78]. Finally, Appendix D contains mathematical results which constitute the skeleton of the work, but whose proofs would be too long to include in the body of the paper. This work is based on Selma Souihel's PhD thesis, where more extensive results can be found [74]. In particular, it contains an analysis of the combined effects of retinal and cortical anticipation, the subject of a forthcoming paper and briefly discussed in the conclusion.

In all the following simulations, we use the CImg Library, an open-source C++ toolkit for image processing, to load the stimuli and reconstruct the retinal activity. The source code is available upon request.

Material and methods

Retinal organisation

In retinal processing, light photons coming from a visual scene are converted into voltage variations by photoreceptors (cones and rods). The complex hierarchical and layered structure of the retina makes it possible to convert these variations into spike trains, produced by ganglion cells (GCells) and conveyed to the thalamus via their axons. We considerably simplify this process here. The light response induces voltage variations in bipolar cells (BCells), laterally connected via amacrine cells (ACells) and feeding GCells, as depicted in Fig. 1. We describe this structure in detail below. Note that neither BCells nor ACells are spiking; they act synaptically on each other by graded variations of their potential.

We assimilate the retina to a flat, two-dimensional square of edge length L mm. We therefore do not integrate the three-dimensional structure of the retina in the model, merely for mathematical convenience. Spatial coordinates are denoted x, y (see Fig. 2 for the whole structure).

Figure 2

Example of a retina grid tiling and indexing. The green and blue ellipses denote respectively the positive centre and the negative surround of the BCell receptive field \({\mathcal {K}}_{S}\). The centre of RF coincides with the position of the cell (blue and green arrows). The red ellipse denotes the ganglion cell pooling over bipolar cells (Eq. (17))

In the model, each cell population tiles the retina with a regular square lattice. The density of cells is therefore uniform for convenience, but the extension to a nonuniform density can be afforded. For the population p, we denote by \(\delta _{p}\) the lattice spacing in mm and by \(N_{p}\) the total number of cells. Without loss of generality we assume that L, the retina's edge size, is a multiple of \(\delta _{p}\). We denote by \(L_{p}=\frac{L}{\delta _{p}}\) the number of cells p per row or column, so that \(N_{p}=L_{p}^{2}\). Each cell in the population p has thus Cartesian coordinates \((x,y)=(i_{x} \delta _{p},i_{y} \delta _{p})\), \((i_{x},i_{y}) \in \{ 1, \ldots , L_{p} \} ^{2}\). To avoid multiple indices, we associate with each pair \((i_{x},i_{y})\) a unique index \(i=i_{x}+(i_{y}-1) L_{p}\). The cell of population p located at coordinates \((i_{x} \delta _{p}, i_{y} \delta _{p})\) is then denoted by \({p}_{i}\). We denote by \(d [ {p}_{i}, {p'}_{j} ]\) the Euclidean distance between \({p}_{i}\) and \({p'}_{j}\).
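The index mapping above is easily implemented and inverted. The following sketch (not the authors' code; function names are ours) shows the 1-based flattening \(i=i_{x}+(i_{y}-1) L_{p}\) and its inverse:

```python
# Sketch of the paper's lattice indexing: a cell at 1-based lattice
# coordinates (i_x, i_y) on an L_p x L_p grid gets the unique index
# i = i_x + (i_y - 1) * L_p. Function names are ours.

def cell_index(i_x: int, i_y: int, L_p: int) -> int:
    """Unique index of the cell at lattice coordinates (i_x, i_y)."""
    return i_x + (i_y - 1) * L_p

def cell_coords(i: int, L_p: int) -> tuple:
    """Inverse mapping: recover (i_x, i_y) from the unique index i."""
    i_y = (i - 1) // L_p + 1
    i_x = i - (i_y - 1) * L_p
    return i_x, i_y

# Example on a 4x4 lattice (L_p = 4): the cell at (3, 2) has index 7.
assert cell_index(3, 2, 4) == 7
assert cell_coords(7, 4) == (3, 2)
```

The physical position of cell \(i\) of population p is then \((i_{x} \delta _{p}, i_{y} \delta _{p})\).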

We use the notation \(V_{{p}_{i}}\) for the membrane potential of cell \({p}_{i}\). Cells are coupled: the synaptic weight from cell \({p}_{j}\) to cell \({q}_{i}\) reads \(W^{{p}_{j}}_{{q}_{i}}\); thus, the pre-synaptic neuron appears in the upper index and the post-synaptic one in the lower index. The dynamics of cells is voltage-based, because our model is constructed from the model of Chen et al. [20], itself derived from Berry et al. [9], where a voltage-based description is used. Implicitly, voltage is measured with respect to the rest state of the cell (\(V_{{p}_{i}}=0\) when the cell receives no input).

Bipolar cell layer

The model consists first of a set of \(N_{B}\) BCells, regularly spaced by a distance \(\delta _{B}\), with spatial coordinates \(x_{i}\), \(y_{i}\), \(i=1 , \ldots, N_{B}\). Their voltage, a function of the stimulus, is computed as follows.

Stimulus response and receptive field

The projection of the visual scene on the retina ("stimulus") is a function \({\mathcal {S}}(x,y,t)\), where t is the time coordinate. As we do not consider colour sensitivity here, \({\mathcal {S}}\) characterises a black and white scene with a contrast level \(\in [0,1]\). A receptive field (RF) is a region of the visual field (the physical space) in which stimulation alters the voltage of a cell. Thus, BCell i has a spatio-temporal receptive field \({\mathcal {K}}_{{B}_{i}}\), featuring the biophysical processes occurring at the level of the outer plexiform layer (OPL), that is, the photoreceptor (rods and cones) response modulated by horizontal cells (HCells). As a consequence, in our model, the voltage of BCell i is stimulus-driven by the term

$$ V_{i_{\mathrm{drive}}}(t)= [ {\mathcal {K}}_{{B}_{i}} \stackrel {x,y,t}{\ast } {\mathcal {S}}](t) = \int _{x=-\infty }^{+\infty } \int _{y=-\infty }^{+ \infty } \int _{s=-\infty }^{t} {\mathcal {K}}(x-x_{i},y-y_{i},t-s) {\mathcal {S}}(x,y,s) \,dx \,dy \,ds, $$

where \(\stackrel {x,y,t}{\ast }\) means space-time convolution. We consider only one family of BCells, so that the kernel \({\mathcal {K}}\) is the same for all of them; what changes is the centre of the RF, located at \(x_{i}\), \(y_{i}\), which also corresponds to the coordinates of BCell i. We consider in this paper a separable kernel \({\mathcal {K}}(x,y,t)={\mathcal {K}}_{S}(x,y) {\mathcal {K}}_{T}(t)\), where \({\mathcal {K}}_{S}\) is the spatial part and \({\mathcal {K}}_{T}\) the temporal part. The detailed form of \({\mathcal {K}}\) is given in Appendix B.

We have

$$ \frac{d V_{i_{\mathrm{drive}}}}{d t}= \biggl[ {\mathcal {K}}_{{B}_{i}} \stackrel {x,y,t}{\ast } \frac{d {\mathcal {S}}}{dt} \biggr](t), $$

resulting from the condition \({\mathcal {K}}_{{B}_{i}}(x,y,0)=0\) (see Appendix B). Note that the exponential decay of the spatial and temporal parts at infinity ensures the existence of the space-time integral. The spatial integral \(\int _{\mathbb {R}^{2}} {\mathcal {K}}_{S}(x,y) S(x,y,u) \,dx \,dy\) is numerically computed using the error function in the case of circular RFs, and using a computer vision method from Geusebroek et al. [39] in the case of anisotropic RFs, which allows integrating generalised Gaussians efficiently. This method is described in Appendix B.
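To illustrate the error-function evaluation mentioned above, here is a minimal sketch (our own, with purely illustrative amplitudes and widths, not the paper's parameters) for an isotropic difference-of-Gaussians spatial kernel and a vertical bar stimulus of unit intensity, where the 2D integral factorises into 1D Gaussian cumulatives:

```python
# Illustrative sketch: spatial integral of a centre-surround
# difference-of-Gaussians (DoG) kernel against a vertical bar covering
# [a, b] x R. Each normalised Gaussian integrates to 1 along y, so only
# the x-integral remains and is expressed with erf.
# All parameter values below are ours, not taken from the paper.
from math import erf, sqrt

def gauss_cdf_diff(a, b, sigma):
    """Integral of a normalised 1D Gaussian (std sigma) over [a, b]."""
    return 0.5 * (erf(b / (sqrt(2) * sigma)) - erf(a / (sqrt(2) * sigma)))

def dog_bar_response(a, b, A_c=1.0, s_c=0.1, A_s=0.5, s_s=0.3):
    """Response of a DoG kernel centred at 0 to a bar covering [a, b]
    horizontally and the whole vertical axis (unit intensity)."""
    return A_c * gauss_cdf_diff(a, b, s_c) - A_s * gauss_cdf_diff(a, b, s_s)

# A bar covering the whole field excites centre and surround fully,
# leaving the net DoG weight A_c - A_s; a narrow central bar mostly
# excites the centre.
full = dog_bar_response(-10.0, 10.0)
narrow = dog_bar_response(-0.05, 0.05)
```

In the actual model, this 1D reduction applies whenever the stimulus is constant along one axis; the general anisotropic case is handled by the method of Appendix B.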

For explanatory purposes, we will often approximate \(V_{i_{\mathrm{drive}}}\) by a Gaussian pulse of width σ, propagating at constant speed v along the direction \(\vec {e}_{x}\):

$$ V_{i_{\mathrm{drive}}}(t)=\frac{A_{0}}{\sqrt{2 \pi } \sigma } e^{- \frac{1}{2} \frac{ ( x - v t )^{2}}{\sigma ^{2}}} \equiv \frac{V_{0}}{\sqrt{2 \pi }} e^{-\frac{1}{2} \frac{ ( x - v t )^{2}}{\sigma ^{2}}}, $$

where \(x=i \delta _{B}\) is the horizontal coordinate of BCell i; σ is in mm, \(A_{0}\) is in \(\mathrm{mV}\cdot\mathrm{mm}\) (and is proportional to the stimulus contrast), and \(V_{0}\) is in mV.

BCells voltage and gain control

In our model, the BCell voltage is the sum of external drive (1) received by the BCell and of a post-synaptic potential \(P_{{B}_{i}}\) induced by connected ACells:

$$ V_{{B}_{i}}(t)=V_{i_{\mathrm{drive}}}(t) + P_{{B}_{i}}(t). $$

The form of \(P_{{B}_{i}}\) is given by Eq. (11) in Sect. 2.3.1. \(P_{{B}_{i}}(t)=0\) when no ACells are considered.

BCells have a voltage threshold [9]:

$$ {\mathcal {N}}_{B}(V_{{B}_{i}}) = \textstyle\begin{cases} 0, &\mbox{if } V_{{B}_{i}} \le \theta _{B}; \\ V_{{B}_{i}}-\theta _{B}, &\mbox{else}. \end{cases} $$

Values of parameters are given in Appendix A.

BCells have gain control, a desensitisation when activated by steady illumination [92]. This desensitisation is mediated by a rise in intracellular calcium \(\mathrm{Ca}^{2+}\), at the origin of a feedback inhibition, thus preventing prolonged signalling of the ON BCell [20, 73]. Following Chen et al., we introduce the dimensionless activity variable \(A_{{B}_{i}}\) obeying the differential equation

$$ \frac{dA_{{B}_{i}}}{dt} = -\frac{A_{{B}_{i}}}{\tau _{a}} + h_{B} {\mathcal {N}}\bigl(V_{{B}_{i}}(t)\bigr). $$

Assuming an initial condition \(A_{{B}_{i}}(t_{0})=0\) at initial time \(t_{0}\), the solution is

$$ A_{{B}_{i}}(t)=h_{B} \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} {\mathcal {N}}\bigl(V_{{B}_{i}}(s)\bigr) \,ds. $$
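This closed-form solution can be checked against a direct Euler integration of the activity equation (6). The sketch below (our own, with illustrative parameter values, not the paper's) uses a Gaussian pulse in time as the bipolar voltage:

```python
# Numerical check that the closed-form exponential-kernel integral for
# the activity A(t) matches an Euler integration of
# dA/dt = -A/tau_a + h_B * N(V). Parameters are illustrative, not the
# paper's fitted values.
import numpy as np

tau_a, h_B = 100.0, 0.01     # ms, (ms mV)^-1 -- illustrative
theta_B = 0.0
dt = 0.05
t = np.arange(0.0, 400.0, dt)

# A Gaussian voltage pulse in time (mV, ms), standing in for V_B(t)
V = 5.0 * np.exp(-0.5 * ((t - 200.0) / 30.0) ** 2)
NV = np.maximum(V - theta_B, 0.0)          # threshold rectification

# Closed-form solution, as a discretised convolution integral
A_closed = h_B * dt * np.convolve(NV, np.exp(-t / tau_a))[: len(t)]

# Euler integration of the activity ODE (6)
A_euler = np.zeros_like(t)
for k in range(1, len(t)):
    A_euler[k] = A_euler[k - 1] + dt * (-A_euler[k - 1] / tau_a
                                        + h_B * NV[k - 1])
```

The two curves agree up to discretisation error, confirming that the activity is a low-pass filtered, rectified copy of the bipolar voltage.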

The bipolar output to ACells and GCells is then characterised by a nonlinear response to the voltage, given by

$$ R_{{B}_{i}} ( V_{{B}_{i}}, A_{{B}_{i}} )= {\mathcal {N}}_{B} ( V_{{B}_{i}} ) {\mathcal {G}}_{B} ( A_{{B}_{i}} ), $$


$$ {\mathcal {G}}_{B}(A_{{B}_{i}})= \textstyle\begin{cases} 0, & \mbox{if } A_{{B}_{i}} \le 0; \\ \frac{1}{1+A_{{B}_{i}}^{6}}, & \mbox{else}. \end{cases} $$

Note that \(R_{{B}_{i}}\) has the physical dimension of a voltage, whereas, from Eq. (9), the activity \(A_{{B}_{i}}\) is dimensionless. As a consequence, the parameter \(h_{B}\) in Eq. (6) must be expressed in \(\mathrm{ms}^{-1}\,\mathrm{mV}^{-1}\). The form (9), with its 6th power, is based on experimental fits made by Chen et al.; it is shown in Fig. 3.

Figure 3

Gain control (9) as a function of activity A. The function \(l(A)\), in dashed line, is a piecewise linear approximation of \({\mathcal {G}}_{B}(A)\) from which three regions are roughly defined. In the region "Silent" the gain vanishes, so the cell does not respond to stimuli; in the region "Max" the gain is maximal, so that the cell behaves as if it had no gain control; the region "Fast decay" is the one contributing to anticipation, by shifting the peak in the cell's activity (see Sect. 3.1). The value \(A_{c}=\frac{2}{3}\) is the activity at which gain control, in the piecewise linear approximation, becomes effective

In the course of the paper we will use the following piecewise linear approximation also represented in Fig. 3:

$$ {\mathcal {G}}_{B}(A) = \textstyle\begin{cases} 0, & \mbox{if } A \in \,]{-}\infty ,0[ \,\cup\, [\frac{4}{3}, +\infty [ \quad \mbox{(silent region)}; \\ 1, & \mbox{if } A \in [0,\frac{2}{3}] \quad \mbox{(maximal gain)}; \\ -\frac{3}{2}A+ 2, & \mbox{if } A \in [\frac{2}{3},\frac{4}{3}] \quad \mbox{(fast decay)}. \end{cases} $$

Thanks to this approximation, we roughly distinguish three regions for the gain function \({\mathcal {G}}_{B}(A)\). This shape is useful to understand the mechanism of anticipation (Sect. 3.1).
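The gain function (9) and its piecewise linear approximation can be sketched as follows (our own illustrative code, mirroring the formulas above):

```python
# Sketch of the gain control function (9) and its piecewise linear
# approximation, as plotted in Fig. 3 of the paper.

def gain(A: float) -> float:
    """Gain control (9): 1 / (1 + A^6) for A > 0, else 0."""
    return 1.0 / (1.0 + A**6) if A > 0 else 0.0

def gain_pw(A: float) -> float:
    """Piecewise linear approximation of the gain (three regions)."""
    if A < 0 or A >= 4.0 / 3.0:
        return 0.0                     # silent region
    if A <= 2.0 / 3.0:
        return 1.0                     # maximal gain
    return -1.5 * A + 2.0              # fast decay

# At the critical activity A_c = 2/3 the approximation leaves the
# maximal-gain plateau; at A = 4/3 the linear branch reaches zero,
# joining the silent region continuously.
assert gain_pw(2.0 / 3.0) == 1.0
assert gain_pw(4.0 / 3.0) == 0.0
```

The "fast decay" branch is the one responsible for the anticipatory peak shift discussed in Sect. 3.1: as activity builds up while the stimulus crosses the RF, the gain collapses before the drive reaches its maximum.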

Amacrine cell layer

There is a wide variety of ACells (about 30–40 different types in humans) [64]. Some specific types are well studied, such as starburst amacrine cells, which are involved in direction sensitivity [33, 34, 81] as well as in contrast impression and suppression of GCell responses [58], or AII, a central element of the vertebrate rod-cone pathway [54].

Here, we do not want to consider specific types of ACells with a detailed biophysical description. Instead, we want to point out the potential role they can play in motion anticipation thanks to the inhibitory lateral connectivity they induce. We focus on a specific circuitry involved in differential motion: an object whose motion differs from that of its background induces a more salient activity. The mechanism, observed in mice and rabbit retinas [41, 61], is featured in Fig. 1, pathway III. When the left pathway receives an illumination different from the right pathway (corresponding, e.g. to a moving object), this asymmetry is amplified by the ACells' mutual inhibition, enhancing the response of the left pathway in a "push–pull" effect. We propose that such a mutual inhibition circuit, deployed in a lattice through the whole retina, can generate, under specific conditions analysed mathematically below, a wave of activity triggered by the moving object.
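The push–pull effect can be illustrated with a deliberately reduced toy model (ours, not the paper's circuit: linear passive dynamics, crossed inhibition between two pathways, illustrative weights). An asymmetry between the two drives comes out amplified in the bipolar outputs:

```python
# Toy "push-pull" sketch: two pathways whose ACells inhibit the
# opposite BCell. Assumed linear dynamics and illustrative parameters;
# this is a caricature of Fig. 1, pathway III, not the full model.
import numpy as np

tau = 20.0                 # ms, common time constant
w_exc, w_inh = 0.3, 0.3    # BCell->ACell and ACell->BCell weights
dt, T = 0.1, 500.0

def run(drive_left, drive_right):
    """Euler-integrate the two-pathway circuit to (near) steady state."""
    vB = np.zeros(2)
    vA = np.zeros(2)
    drive = np.array([drive_left, drive_right])
    for _ in range(int(T / dt)):
        # each ACell inhibits the *opposite* BCell (crossed inhibition)
        dvB = (-vB + drive - w_inh * vA[::-1]) / tau
        dvA = (-vA + w_exc * np.maximum(vB, 0.0)) / tau
        vB += dt * dvB
        vA += dt * dvA
    return vB

sym = run(1.0, 1.0)    # equal drives -> equal outputs
asym = run(1.0, 0.5)   # asymmetric drives -> amplified asymmetry
```

At steady state the normalised output asymmetry \((v_{1}-v_{2})/(v_{1}+v_{2})\) exceeds the input asymmetry, which is the essence of the "push–pull" amplification; equal drives yield perfectly symmetric outputs.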

In the model, ACells tile the retina with a lattice spacing \(\delta _{A}\). We index them with \(j=1 , \ldots, N_{A}\).

Synaptic connections between ACells and BCells

We consider here a simple model of ACells. We assimilate them to passive cells (no active ionic channels) acting as a simple relay between BCells. This aspect is further discussed later in the paper. The ACell \({A}_{j}\), connected to the BCell \({B}_{i}\), induces on the latter the post-synaptic potential:

$$ P^{{A}_{j}}_{{B}_{i}}(t) = W^{{A}_{j}}_{{B}_{i}} \int _{-\infty }^{t} \gamma _{B}(t-s) V_{{A}_{j}}(s) \,ds; \qquad \gamma _{B}(t)=e^{- \frac{t}{\tau _{B}}} H(t), $$

where the Heaviside function H ensures causality. The post-synaptic potential is thus the mere convolution of the pre-synaptic ACell voltage with an exponential profile [28]. In addition, we assume the propagation to be instantaneous.

Here, the synaptic weight \(W^{{A}_{j}}_{{B}_{i}} < 0\) mimics the inhibitory connection from ACell to BCell (glycine or GABA) with the convention that \(W^{{A}_{j}}_{{B}_{i}}=0\) if there is no connection from \({A}_{j}\) to \({B}_{i}\).

In general, several ACells input the BCell \({B}_{i}\) giving a total PSP:

$$ P_{{B}_{i}}(t) = \sum_{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} \int _{- \infty }^{t} \gamma _{B}(t-s) V_{{A}_{j}}(s) \,ds. $$

Conversely, the BCell \({B}_{i}\) connected to \({A}_{j}\) induces, on this cell, a synaptic response characterised by a post-synaptic potential (PSP) \(P_{{A}_{j}}(t)\). As ACells are passive elements, their voltage \(V_{{A}_{j}}(t)\) is equal to this PSP. We have thus

$$ V_{{A}_{j}}(t) = \sum_{i=1}^{N_{B}} W^{{B}_{i}}_{{A}_{j}} \int _{- \infty }^{t} \gamma _{A}(t-s) R_{{B}_{i}}(s) \,ds $$

with \(\gamma _{A}(t)=e^{-\frac{t}{\tau _{A}}} H(t)\). Here, \(W^{{B}_{i}}_{{A}_{j}} > 0\), corresponding to the excitatory effect of BCells on ACells through glutamate release. Note that the voltage of the BCell is rectified and gain-controlled.


The coupled dynamics of bipolar and amacrine cells can be described by a dynamical system that we derive now.

Bipolar voltage

By differentiating (11), (4) and introducing

$$ F_{{B}_{i}}(t)= \biggl[ {\mathcal {K}}_{{B}_{i}} \stackrel {x,y,t}{\ast } \biggl( \frac{{\mathcal {S}}}{\tau _{B}} + \frac{d {\mathcal {S}}}{dt} \biggr) \biggr](t) = \frac{V_{i_{\mathrm{drive}}}}{\tau _{B}} + \frac{d V_{i_{\mathrm{drive}}}}{d t}, $$

we end up with the following equation for the bipolar voltage:

$$ \frac{dV_{{B}_{i}}}{d t} = - \frac{1}{\tau _{B}} V_{{B}_{i}} + \sum_{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} V_{{A}_{j}} + F_{{B}_{i}}(t), $$

where we have used (2). This is a differential equation driven by the time-dependent term \(F_{{B}_{i}}\) containing the stimulus and its time derivative.

To illustrate the role of \(F_{{B}_{i}}\), let us consider an object moving with a time-dependent speed v⃗, thus with a nonzero acceleration \(\vec {\gamma }=\frac{d \vec {v}}{dt}\). This stimulus has the form \({\mathcal {S}}(x,y,t)=g ( \vec {X}-\vec {v}(t) t )\) with \(\vec {X}=(x,y)^{T}\), so that \(\frac{d {\mathcal {S}}}{dt}=- \vec {\nabla }g ( \vec {X}-\vec {v}(t) t ) \cdot ( \vec {v}+ \vec {\gamma }t )\), where \(\vec {\nabla }\) denotes the gradient. Therefore, thanks to Eq. (14), BCells are sensitive to changes in direction, thereby justifying a study of two-dimensional stimuli (Sect. 3.4). Note that this property is inherited from the simple, differential structure of the dynamics, the term \(\frac{d V_{i_{\mathrm{drive}}}}{d t}\) resulting from the differentiation of \(V_{{B}_{i}}\). This term does not appear in the classical formulation (1) of the bipolar response without amacrine connectivity. It appears here because the synaptic response involves an implicit time derivative via the convolution (12).

Coupled dynamics

Likewise, differentiating (12) gives

$$ \frac{d V_{{A}_{j}}}{d t} = - \frac{1}{\tau _{A}} V_{{A}_{j}}+ \sum_{i=1}^{N_{B}} W^{{B}_{i}}_{{A}_{j}} R_{{B}_{i}}. $$

Equations (6) (activity), (14) and (15) define a set of \(2 N_{B} +N_{A}\) differential equations ruling the behaviour of coupled BCells and ACells under the drive of the stimulus, which appears in the term \(F_{{B}_{i}}(t)\). We summarise the differential system here:

$$ \textstyle\begin{cases} \frac{dV_{{B}_{i}}}{d t} = - \frac{1}{\tau _{B}} V_{{B}_{i}} + \sum_{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} V_{{A}_{j}} + F_{{B}_{i}}(t), \\ \frac{d V_{{A}_{j}}}{d t} = - \frac{1}{\tau _{A}} V_{{A}_{j}}+ \sum_{i=1}^{N_{B}} W^{{B}_{i}}_{{A}_{j}} R_{{B}_{i}}, \\ \frac{dA_{{B}_{i}}}{dt} = -\frac{A_{{B}_{i}}}{\tau _{a}} + h_{B} {\mathcal {N}}(V_{{B}_{i}}). \end{cases} $$

We have used the classical dynamical systems convention where time appears explicitly only in the driving term \(F_{{B}_{i}}(t)\) to emphasise that (16) is non-autonomous. Note that BCells act on ACells via a rectified voltage (gain control and piecewise linear rectification), in agreement with Fig. 1, pathway III. We analyse this dynamics in Sect. 3.2.1.
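A minimal numerical sketch of system (16) is given below. It is ours, not the paper's simulator: a 1D lattice, illustrative parameter values, a simple nearest-neighbour inhibition matrix (anticipating Sect. "Connectivity graph"), and the Gaussian-pulse drive approximation for a bar moving at constant speed:

```python
# Euler integration of system (16) for N coupled BCells/ACells on a 1D
# lattice. All parameters and the connectivity are illustrative.
import numpy as np

N = 50
dt, T = 0.1, 300.0                     # ms
tau_B, tau_A, tau_a = 20.0, 20.0, 100.0
h_B, theta_B = 0.01, 0.0
w_plus, w_minus = 0.1, 0.1             # w+ (B->A), w- (A->B)

# Nearest-neighbour inhibition matrix W (ACells j inhibiting BCell i);
# the effective weight matrix is W^A_B = -w_minus * W.
W = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            W[i, j] = 1.0

def N_B(V):                            # threshold rectification
    return np.maximum(V - theta_B, 0.0)

def G_B(A):                            # gain control (9)
    return np.where(A > 0, 1.0 / (1.0 + A**6), 0.0)

def F(t, x, v=0.5, sigma=2.0, A0=30.0):
    """Drive F = V_drive/tau_B + dV_drive/dt for a Gaussian pulse
    moving at constant speed v (the approximation of Sect. 2.2)."""
    Vd = A0 / (np.sqrt(2 * np.pi) * sigma) \
        * np.exp(-0.5 * ((x - v * t) / sigma) ** 2)
    dVd = Vd * (x - v * t) * v / sigma**2   # exact time derivative
    return Vd / tau_B + dVd

x = np.arange(N, dtype=float)
V_B = np.zeros(N); V_A = np.zeros(N); A = np.zeros(N)
for step in range(int(T / dt)):
    t = step * dt
    R = N_B(V_B) * G_B(A)                   # rectified, gain-controlled output
    dV_B = -V_B / tau_B - w_minus * W @ V_A + F(t, x)
    dV_A = -V_A / tau_A + w_plus * R
    dA = -A / tau_a + h_B * N_B(V_B)
    V_B += dt * dV_B; V_A += dt * dV_A; A += dt * dA
```

Tracking `R` over time along the lattice reproduces, in this caricature, the interplay between gain control and lateral inhibition analysed in Sect. 3.2.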

Connectivity graph

The way ACells connect to BCells, and reciprocally, has a deep impact on dynamics (16). In this paper, we want to point out the role of relative excitation (from BCells to ACells) and inhibition (from ACells to BCells), as well as the role of the network topology. For mathematical convenience when dealing with square matrices, we assume from now on that there are as many BCells as ACells, and we set \(N \equiv N_{A}=N_{B}\). At the core of our mathematical studies is a matrix \({\mathcal {L}}\), defined in Sect. 3.2.1, whose spectrum conditions the evolution of the BCells–ACells network under the influence of a stimulus. It is interesting and relevant to relate the spectrum of \({\mathcal {L}}\) to the spectra of the connectivity matrices from ACells to BCells and from BCells to ACells. There is no such general relation for arbitrary connectivity matrices; a simple case holds when the two matrices commute. Here, we choose an even simpler situation, based on the fact that we compare the role of the direct feed-forward pathway on anticipation in the presence of ACell lateral connectivity. We feature the direct pathway by assuming that each BCell connects to only one ACell with a weight \(w^{+}>0\), uniform across BCells, so that \(W^{{B}}_{{A}} = w^{+} I_{N,N}\), where \(I_{N,N}\) is the N-dimensional identity matrix. In contrast, we assume that ACells connect to BCells with a connectivity matrix \({\mathcal {W}}\), not necessarily symmetric, with a uniform weight \(- w^{-}\), \(w^{-}>0\), so that \(W^{{A}}_{{B}} = -w^{-} {\mathcal {W}}\).

We consider then two types of network topology for \({\mathcal {W}}\):

  1. 1.

    Nearest neighbours. An ACell connects its 2d nearest BCell neighbours where \(d=1,2\) is the lattice dimension.

  2. 2.

    Random ACell connectivity. This model is inspired by the paper [78] on the shape and arrangement of starburst ACells in the rabbit retina. Each cell (ACell and BCell) has a random number of branches (dendritic tree), each of which has a random length and a random angle with respect to the horizontal axis. The length L of a branch follows an exponential distribution with spatial scale ξ. The number of branches n is also a random variable, Gaussian with mean and variance \(\sigma _{n}\). The angle distribution is taken to be isotropic in the plane, i.e. uniform on \([0,2 \pi [\). When a branch of an ACell A intersects a branch of a BCell B, there is a chemical synapse from A to B. The probability that two branches intersect follows a nearly exponential distribution that can be computed analytically (see Appendix C).
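To make the construction concrete, here is a minimal numerical sketch of this random connectivity model. All names and parameter values (`mean_branches`, `sigma_n`, `xi`, the cell positions) are illustrative assumptions, not taken from the paper; branches are modelled as straight segments rooted at the cell position, and a strict segment-intersection test stands in for the synapse criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_branches(xs, mean_branches=4, sigma_n=1.0, xi=0.1):
    """One dendritic tree per cell: each branch is a straight segment with
    exponential length (scale xi) and isotropic angle, rooted at the soma."""
    trees = []
    for x in xs:
        n = max(1, int(round(rng.normal(mean_branches, sigma_n))))
        lengths = rng.exponential(xi, n)
        angles = rng.uniform(0.0, 2.0 * np.pi, n)
        starts = np.tile([x, 0.0], (n, 1))
        ends = starts + np.stack([lengths * np.cos(angles),
                                  lengths * np.sin(angles)], axis=1)
        trees.append(list(zip(starts, ends)))
    return trees

def segments_intersect(p1, p2, p3, p4):
    """Strict 2D segment-intersection test via orientation signs
    (collinear touching is ignored for simplicity)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def connectivity(acell_trees, bcell_trees):
    """W[i, j] = 1 when a branch of ACell j crosses a branch of BCell i,
    i.e. there is a chemical synapse from A_j to B_i."""
    W = np.zeros((len(bcell_trees), len(acell_trees)))
    for j, ta in enumerate(acell_trees):
        for i, tb in enumerate(bcell_trees):
            if any(segments_intersect(a0, a1, b0, b1)
                   for (a0, a1) in ta for (b0, b1) in tb):
                W[i, j] = 1.0
    return W

xs = np.linspace(0.0, 1.0, 20)   # cell positions on a 1D lattice (mm)
W = connectivity(make_branches(xs), make_branches(xs))
```

Averaging W over many draws gives an empirical estimate of the intersection probability as a function of inter-cell distance, to be compared with the nearly exponential law of Appendix C.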

Ganglion cells

There are many different types of GCells in the retina, with different physiologies and functions [3, 70]. In the present computational study, we focus on specific subtypes associated with pathways I–II (fast OFF cells with gain control), III (differential motion sensitive cells) and IV (direction selective cells) in Fig. 1. All these have common features: BCells pooling and gain control.

BCells pooling

In the retina, GCells of the same type cover the surface of the retina, forming a mosaic. The degree of overlap between GCells indicates the extent to which their dendritic arbours are entangled in one another. This overlap remains, however, very limited between cells of the same type [68]. We denote by k the index of the GCells, \(k=1 , \ldots, N_{G}\), and by \(\delta _{G}\) the spacing between two consecutive GCells lying on the grid (Fig. 2).

In the model, GCell k pools over the output of BCells in its neighbourhood [20]. Its voltage reads as follows:

$$ V_{{G}_{k}}^{(P)} = \sum _{i} W^{{B}_{i}}_{{G}_{k}} R_{{B}_{i}}, $$

where the superscript “P” stands for “pool”. We use this notation to differentiate this voltage from the total GCell voltage \(V_{{G}_{k}}\) when they are different. This happens in the case when GCells are directly coupled by gap junctions (Sects. 2.4.4, 3.3). When there is no ambiguity, we will drop the superscript “P”. In Eq. (17), the weights \(W^{{B}_{i}}_{{G}_{k}}\) are Gaussian:

$$ W^{{B}_{i}}_{{G}_{k}}=a_{p} e^{- \frac{d^{2} [ {B}_{i}, {G}_{k} ]}{2 \sigma _{p}^{2}}}, $$

where \(\sigma _{p}\) has the dimension of a distance and \(a_{p}\) is dimensionless.
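In practice, the pooling of Eqs. (17)–(18) amounts to a matrix–vector product. The sketch below assumes a 1D layout; positions, \(a_{p}\), \(\sigma _{p}\) and the placeholder BCell responses are illustrative, not the calibrated values of Appendix A.

```python
import numpy as np

def pooling_weights(x_b, x_g, a_p=1.0, sigma_p=0.05):
    """Gaussian pooling weights of Eq. (18):
    W[k, i] = a_p * exp(-d^2(B_i, G_k) / (2 sigma_p^2))."""
    d = x_b[None, :] - x_g[:, None]        # pairwise 1D distances (mm)
    return a_p * np.exp(-d**2 / (2.0 * sigma_p**2))

x_b = np.linspace(0.0, 1.0, 100)           # BCell positions
x_g = np.linspace(0.0, 1.0, 10)            # GCell positions
W = pooling_weights(x_b, x_g)
R_b = np.maximum(0.0, np.sin(2.0 * np.pi * x_b))  # placeholder BCell responses
V_g_pool = W @ R_b                         # Eq. (17): pooled GCell voltages
```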

Ganglion cell response

The voltage \(V_{{G}_{k}}\) is processed through a gain control loop similar to that of the BCell layer [20]. As GCells are spiking cells, a nonlinearity is introduced so as to impose an upper limit on the firing rate. Here, it is modelled by a piecewise linear approximation of a sigmoid function:

$$ {\mathcal {N}}_{G} ( V )= \textstyle\begin{cases} 0, &\mbox{if } V \le \theta _{G}; \\ \alpha _{G}(V-\theta _{G}), &\mbox{if } \theta _{G} \le V \le N_{G}^{\mathrm{max}}/\alpha _{G} + \theta _{G}; \\ N_{G}^{\mathrm{max}}, &\mbox{else}. \end{cases} $$

This function corresponds to a probability of firing in a time interval. Thus, it is expressed in Hz. Consequently, \(\alpha _{G}\) is expressed in \(\mathrm{Hz}\, \mathrm{mV}^{-1}\) and \(N_{G}^{\mathrm{max}}\) in Hz. Parameter values can be found in Appendix A.
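The piecewise linear nonlinearity \({\mathcal {N}}_{G}\) above transcribes directly into code; the threshold, slope and ceiling used here are illustrative placeholders, the calibrated values being those of Appendix A.

```python
def N_G(V, theta_G=0.0, alpha_G=10.0, N_max=200.0):
    """Piecewise linear GCell nonlinearity (in Hz): zero below the threshold
    theta_G (mV), linear with slope alpha_G (Hz/mV), saturating at N_max."""
    if V <= theta_G:
        return 0.0
    if V <= N_max / alpha_G + theta_G:
        return alpha_G * (V - theta_G)
    return N_max
```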

Gain control is implemented with an activation function \(A_{{G}_{k}}\), solving the following differential equation:

$$ \frac{dA_{{G}_{k}}}{dt} = -\frac{A_{{G}_{k}}}{\tau _{G}} + h_{G} {\mathcal {N}}_{G} ( V_{{G}_{k}} ), $$

and a gain function

$$ {\mathcal {G}}_{G}(A)= \textstyle\begin{cases} 0, & \mbox{if } A < 0; \\ \frac{1}{1+A}, & \mbox{else}. \end{cases} $$

Note that the origin of this gain control is different from the BCell gain control (9). Indeed, Chen et al. hypothesise that the biophysical mechanisms that could lie behind ganglion gain control are spike-dependent inactivation of \(\mathrm{Na}^{+}\) and \(\mathrm{K}^{+}\) channels, while the study by Jacoby et al. [45] hypothesises that GCells gain control is mediated by feed-forward inhibition that they receive from ACells. The specific forms of the nonlinearity and the gain control function used in this paper match, however, the first hypothesis, namely the suppression of the \(\mathrm{Na}^{+}\) current [20].

Finally, the response function of this GCell type is

$$ R_{{G}} ( V_{{G}_{k}},A_{{G}_{k}} )={\mathcal {N}}_{G}(V_{{G}_{k}}) {\mathcal {G}}_{G}(A_{{G}_{k}}). $$

In contrast to BCell response \(R_{{B}}\) (8), which is a voltage, here \(R_{{G}}\) is a firing rate.

Gain control has been reported for OFF GCells only [9, 20]. Therefore, we restrict our study to OFF cells, i.e. with a negative centre of the spatial RF kernel. However, on mathematical grounds, it is easier to carry out our explanation when the RF centre is positive. Thus, for convenience, we have adopted a change of convention in the contrast measurement: we take the reference value 0 of the stimulus to be white rather than black, black then corresponding to 1. The spatial RF kernel is also inverted, with a positive centre and a negative surround. The problem is therefore mathematically equivalent to an ON cell submitted to a positive stimulus.

Differential motion sensitive cells

We consider here a class of GCells connected to ACells according to pathway III in Fig. 1, acting as differential motion detectors. They respond saliently to an object moving over a stationary surround while being strongly suppressed by global motion. Here, stationary is meant in a general, probabilistic sense: this can be a uniform background or a noisy background whose noise distribution is time-translation invariant. These cells are hence able to filter out head and eye movements. Baccus et al. [1] identified a pathway responsible for this type of response, involving polyaxonal ACells which selectively suppress the GCell response to global motion and enhance its response to differential motion, as shown in Fig. 1, pathway III. The GCell receives an excitatory input from the BCells lying in its receptive field, which respond to the central object motion, and an indirect inhibitory input from ACells connected to BCells which respond to the background motion. When motion is global, the excitatory signal is balanced by the inhibitory one, resulting in an overall suppression. However, when the object in the centre moves distinctly from the surrounding background, the cell in the centre responds strongly.

There are here two concomitant effects. When a moving object (say, moving from left to right) enters the BCell pool connected to a central GCell \(k_{D}\), the BCells in the periphery of the pool respond first, with no significant change in the GCell response because of the Gaussian shape (18) of the pooling: weights are small in the periphery. These BCells nevertheless excite the ACells they are connected to, which inhibit the BCells of neighbouring GCell pools. This decreases the voltage of those BCells, which in turn excite their ACells less, which in turn inhibit the BCells of the pool \(k_{D}\) less. Thus, the response of the GCell \(k_{D}\) is enhanced, while the cells on the background are inhibited. We call this the “push–pull” effect. Note that propagation delays ought to play an important role here, although we do not consider them in this paper.

Direction selective GCells and gap junction connectivity

These cells correspond to pathway IV in Fig. 1. They are coupled only via electric synapses (gap junctions). In several animals, like the mouse, this enables the corresponding GCells to be direction sensitive. Note that other mechanisms, involving lateral inhibition via starburst amacrine cells, have also been widely reported [33, 34, 71, 72, 81, 85, 88]. Here we focus on gap-junction-coupled direction sensitive cells (DSGCs). There exist four major types of these DSGCs, each responding to edges moving in one of the four cardinal directions. Trenholm et al. [80] emphasised the role of the coupling of these cells in lag normalisation: uncoupled cells begin responding when a bar enters their receptive field, i.e. their dendritic field extension, whereas coupled cells start responding before the bar reaches their dendritic field. This anticipated response is due to the effective propagation of activity from neighbouring cells through gap junctions and is particularly interesting when comparing the responses for different velocities of the bar. Trenholm et al. showed that uncoupled DSGCs detect the bar at a position which shifts further as the velocity grows, while coupled cells respond at an almost constant position regardless of the velocity. In our work, we analyse this effect in terms of a propagating wave driven by the stimulus and show that, temporally, this spatial lag normalisation induces a motion extrapolation that confers to the retina more than the ability to compensate for processing delays: it allows it to anticipate motion.

Classical, symmetric bidirectional gap junction coupling between neighbouring cells would involve a current of the form \(-g (V_{{G}_{k}}-V_{{G}_{k-1}})-g(V_{{G}_{k}}-V_{{G}_{k+1}})\), where g is the gap junction conductance. In contrast, here, the current takes the form \(-g (V_{{G}_{k}}-V_{{G}_{k-1}})\). This is due to the specific asymmetric structure of the direction selective GCell dendritic tree [80]. The experimental results of these authors suggest that the effect of the possible gap junction input from downstream cells, in the direction of motion, can be neglected due to offset inhibition and gain control suppression. This, along with the asymmetry of the dendritic arbour, justifies the approximation whereby cell k+1 does not influence the current in cell k. This induces a strong difference in the propagation of a perturbation. Indeed, consider the case \(V_{{G}_{k}}-V_{{G}_{k-1}}=V_{{G}_{k}}-V_{{G}_{k+1}}=\delta \). In the symmetric form the total current vanishes, whereas in the asymmetric form the current is \(-g \delta \). Still, the current can flow in either direction, depending on the sign of δ. This has a strong consequence on the way GCells connected by gap junctions respond to a propagating stimulus, as shown in Sect. 3.3.

The total GCell voltage is the sum of the pooled BCell voltage \(V_{{G}_{k}}^{(P)}\) and of the contribution of the neighbouring GCells connected to k by gap junctions:

$$ V_{{G}_{k}} (t) = V_{{G}_{k}}^{(P)} - \frac{g}{C} \int _{-\infty }^{t} \bigl(V_{{G}_{k}} (s) - V_{{G}_{k-1}} (s)\bigr)\,ds, $$

where C is the membrane capacitance. Deriving the previous equation with respect to time, we obtain the following differential equation governing the GCell voltage:

$$ \frac{dV_{{G}_{k}}}{dt} = \frac{dV_{{G}_{k}}^{(P)}}{dt} - w_{\mathrm{gap}} \bigl[ V_{{G}_{k}} (t) - V_{{G}_{k-1}} (t) \bigr], $$


with

$$ w_{\mathrm{gap}}=\frac{g}{C}. $$

Gain control is then applied on \(V_{{G}_{k}}\) as in (22). An alternative is to consider that gain control occurs before the gap junction effect. We investigated this variant as well (not shown, see [74]). Mainly, the anticipatory effect is enhanced when gain control is applied after the gap junction coupling; thus, from now on, we focus on formulation (23).
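The following sketch integrates Eq. (23) with an explicit Euler scheme for a 1D chain of GCells, replacing the pooled voltage \(V^{(P)}_{k}\) by a Gaussian bump moving at speed v (an illustrative surrogate for a moving bar; gain control is omitted and all parameter values are placeholders). It qualitatively reproduces the asymmetric-coupling effect: each coupled cell peaks earlier than its own drive.

```python
import numpy as np

def simulate_chain(w_gap=20.0, v=1.0, N=30, dx=0.05, dt=1e-3, T=2.0):
    """Explicit Euler integration of Eq. (23) for a 1D chain of GCells.
    The pooled voltage V^(P)_k is a Gaussian bump moving at speed v."""
    xs = np.arange(N) * dx
    ts = np.arange(0.0, T, dt)

    def V_pool(t):
        # bump centred at v*t - 0.5, so every cell is at rest at t = 0
        return np.exp(-((xs - (v * t - 0.5)) ** 2) / (2.0 * 0.05 ** 2))

    V = np.zeros(N)
    Vmax = np.zeros(N)
    peaks = np.zeros(N)          # time at which each cell reaches its maximum
    prev = V_pool(ts[0])
    for t in ts[1:]:
        cur = V_pool(t)
        dVp = (cur - prev) / dt  # finite-difference dV^(P)/dt
        lateral = np.zeros(N)
        lateral[1:] = V[1:] - V[:-1]   # asymmetric coupling: upstream only
        V = V + dt * (dVp - w_gap * lateral)
        prev = cur
        new = V > Vmax
        Vmax[new], peaks[new] = V[new], t
    return xs, peaks

xs, peaks = simulate_chain(w_gap=20.0)          # coupled chain
_, peaks_uncoupled = simulate_chain(w_gap=0.0)  # pure feed-forward reference
```

Comparing `peaks` with `peaks_uncoupled` exhibits the anticipatory shift analysed in Sect. 3.3: each coupled cell peaks before its uncoupled counterpart.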

Note that our voltage-based model of gap junctions takes a different form from that of Trenholm et al. (expressed in terms of currents) because we had to adapt it to the pooled voltage form (17). Still, our model reproduces lag normalisation as in the original model, as we checked (not shown, see [74]).


The mechanism of motion anticipation and the role of gain control

The (smooth) trajectory of a moving object can be extrapolated from its past position and velocity to obtain an estimate of its current location [4, 56, 57]. When human subjects are shown a moving bar travelling at constant velocity, while a second bar is briefly flashed in alignment with the moving bar, the subjects report seeing the flashed bar trailing behind the moving bar. This led Berry et al. [9] to investigate the potential role of the retina in anticipation mechanisms. Under constraints on the bar speed and contrast they were able to exhibit a positive anticipation time, defined as the time lag between the peak in the retinal GCell response to a flashed bar and the corresponding peak when the stimulus is a moving bar.

In this paper we adopt a slightly different, though closely related, definition. Indeed, the goal of this modelling paper is to dissect the various potential stages of retinal anticipation, as developed in the next subsections.

Several layers and mechanisms are involved in the model, each one defining a response time and potentially contributing to anticipation under conditions that we now analyse.

Anticipation at the level of a single isolated BCell; the local effect of gain control

We consider first a single BCell without lateral connectivity so that \(V_{{B}_{i}}=V_{i_{\mathrm{drive}}}\). The very mechanism of anticipation at this stage is illustrated in Fig. 4. The peak response time of the convolution of the stimulus with the RF of one BCell occurs at a time \(t_{B}\) (dashed line in Fig. 4(a)). The increase in \(V_{i_{\mathrm{drive}}}\) leads to an increase in activity (Fig. 4(c)) and an increase of \(R_{{B}}\) (Fig. 4(e)). When activity becomes large enough, gain control switches on (Fig. 4(d)) leading to a sharp decrease of the response \(R_{{B}}\) (Fig. 4(e)) and a peak in \(R_{{B}}\) occurring at time \(t_{B_{A}}\) (dashed line in Fig. 4(e)) before \(t_{B}\). The bipolar anticipation time, \(\Delta _{B}= t_{B} - t_{B_{A}}\), is therefore positive.

Figure 4

The mechanism of motion anticipation and the role of gain control. The figure illustrates the bipolar anticipation time \(\Delta _{B}\) without lateral connectivity. We see the response of OFF BCells with gain control to a dark moving bar. The curves correspond to three cells spaced by \(450~\mu \text{m}\). The first line (a) shows the linear filtering of the stimulus corresponding to \(V_{\mathrm{drive}}(t)\) (Eq. (1)). Line (b) corresponds to the threshold nonlinearity \({{\mathcal {N}}}_{B}\) applied to the linear response; (c) represents the adaptation variable (16), and (d) shows the gain control time course. Finally, the last line (e) corresponds to the response \(R_{{B}_{i}}\) of the BCell. The two dashed lines correspond respectively to \(t_{B}\) and \(t_{B_{A}}\), the peak in the response of the (purple) BCell without pooling

Mathematically, \(\Delta _{B} > 0\) results from the intermediate value theorem using that \(\frac{d V_{i_{\mathrm{drive}}}}{d t} \geq 0\) on \([ 0,t_{B} ]\) and that \(t_{B_{A}}\) is defined by

$$ \frac{d V_{i_{\mathrm{drive}}}}{dt}\bigg|_{t=t_{B_{A}}} = - V_{i_{\mathrm{drive}}}(t_{B_{A}}) \frac{{\mathcal {G}}'_{B} ( A_{{B}_{i}} )}{{\mathcal {G}}_{B} ( A_{{B}_{i}} )} \frac{dA_{{B}_{i}}}{dt}\bigg|_{t=t_{B_{A}}}, $$

where the right-hand side is positive provided that the parameters \(h_{B}\), \(\tau _{a}\) are tuned such that \(\frac{d A_{{B}_{i}}}{dt} \geq 0\) on \([0,t_{B}]\). An important consequence is that the amplitude of the response at the peak is smaller in the presence of gain control (compare the amplitude of the voltage in Fig. 4(a) to 4(e)).

The anticipation time at the BCell level depends on parameters such as \(h_{B}\), \(\tau _{a}\). It depends as well on characteristics of the stimulus such as contrast, size and speed. An easy way to figure this out is to consider that the peak in the BCell response (Fig. 4(d), (e)) arises when the gain control function \({\mathcal {G}}_{B} ( A_{{B}_{i}} )\) starts to drop off (Fig. 4(e)), which, from the piecewise linear approximation (10), arises when \(A=\frac{2}{3}\). When \(V_{i_{\mathrm{drive}}}\) has the form (3), this gives, using \({\mathcal {N}}(V_{i_{\mathrm{drive}}})=V_{i_{\mathrm{drive}}}\) (7) and letting the initial time \(t_{0} \to -\infty \) (which corresponds to assuming that the initial state was taken in a distant past, much earlier than the time scales of the model):

$$ A_{{B}_{i}}(t_{B_{A}}) = A_{0} \frac{h_{B}}{v} e^{\frac{1}{2} \frac{\sigma ^{2}}{\tau _{a}^{2} v^{2}}} e^{ \frac{1}{\tau _{a} v} ( x-v t_{B_{A}} )} \biggl[ 1 - \Pi \biggl( \frac{x-v t_{B_{A}}}{\sigma }+ \frac{\sigma }{\tau _{a} v} \biggr) \biggr] = \frac{2}{3}, $$

where \(\Pi (x)\) is the cumulative distribution function of the standard Gaussian probability (see definition, Eq. (60) in Appendix B). This establishes an explicit equation for the time \(t_{B_{A}}\) as a function of contrast (\(A_{0}\)), size (σ) and speed (v) as well as the parameters \(h_{B}\) and \(\tau _{a}\). We do not show the corresponding curves here (see [74] for a detailed study) preferring to illustrate the global anticipation at the level of GCells, illustrated in Fig. 5 below.
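Equation (25) can be solved numerically for \(t_{B_{A}}\), e.g. by bisection, since its left-hand side is increasing in t on the rising phase of the activity. The parameter values below are purely illustrative (the calibrated values are in Appendix A); \(\Pi \) is the standard Gaussian cumulative distribution function of Eq. (60).

```python
import math

def Pi(x):
    """Standard Gaussian cumulative distribution function (Eq. (60))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def A_act(t, x=0.0, A0=1.0, h_B=10.0, v=1.0, sigma=0.09, tau_a=0.2):
    """Left-hand side of Eq. (25): activity of the BCell at position x,
    driven by a Gaussian bar of width sigma moving at speed v."""
    u = (x - v * t) / (tau_a * v)
    z = (x - v * t) / sigma + sigma / (tau_a * v)
    return (A0 * h_B / v
            * math.exp(0.5 * sigma ** 2 / (tau_a ** 2 * v ** 2))
            * math.exp(u) * (1.0 - Pi(z)))

def t_BA():
    """Bisection solve of A_act(t) = 2/3 on the rising phase.
    With these parameters the drive at x = 0 peaks at t = 0, and
    A_act is increasing on the bracket [-2, 0]."""
    lo, hi = -2.0, 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if A_act(mid) < 2.0 / 3.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star = t_BA()   # negative: gain control switches on before the drive peaks
```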

Anticipation time of the BCells pooled voltage

The main effects we want to illustrate in the paper (impact of lateral connectivity on GCell anticipation) are evidenced by the shift of the peak in activity of the BCells pooled voltage, occurring at time \(t_{G}\). We focus on this time here, postponing to Sect. 3.1.3 the subsequent effect of GCell gain control. We assume therefore here that \(h_{G}=0\), so that \(A_{{G}_{k}}=0\) and \({\mathcal {G}}_{G}(A_{{G}_{k}})=1\) in (19). Thus, the firing rate of GCell k is \({\mathcal {N}}_{G}(V_{{G}_{k}})\). For mathematical simplicity, we will consider that the firing rate function (5) of G is a smooth, monotonically increasing sigmoid function, so that \({\mathcal {N}}'_{G}(V_{{G}_{k}}) >0\). We define \(t_{G}\) as the time when \(V_{{G}_{k}}\) is maximal after the stimulus is switched on. This corresponds to \(\frac{d V_{{G}_{k}}}{dt}=0\) and \(\frac{d^{2} V_{{G}_{k}}}{dt^{2}} < 0\). Equivalently, from Eqs. (17), (23), we have

$$ \begin{aligned} \sum_{i} W^{{B}_{i}}_{{G}_{k}} \frac{d R_{{B}_{i}}}{dt} &= \sum_{i} W^{{B}_{i}}_{{G}_{k}} \biggl[ {\mathcal {G}}_{B} ( A_{{B}_{i}} ) {\mathcal {N}}'_{B}(V_{{B}_{i}}) \frac{d V_{{B}_{i}}}{dt} + {\mathcal {N}}_{B}(V_{{B}_{i}}) {\mathcal {G}}'_{B} ( A_{{B}_{i}} ) \frac{dA_{{B}_{i}}}{dt} \biggr] \\ &= w_{\mathrm{gap}} [ V_{{G}_{k}} - V_{{G}_{k-1}} ], \end{aligned} $$

where this equation holds at time \(t=t_{G}\) (we have not written explicitly \(t_{G}\) to alleviate notation). This is the most general equation for the anticipation time at the level of BCells pooling.

In the sum \(\sum_{i}\), there are two types of BCells: the inactive ones, for which \(V_{{B}_{i}} \leq \Theta _{B}\), so that \({\mathcal {N}}_{B}(V_{{B}_{i}})=0\) and \(\frac{d R_{{B}_{i}}}{dt}=0\) and they do not contribute to the activity; and the active ones, for which \(V_{{B}_{i}} > \Theta _{B}\) and \({\mathcal {N}}_{B} ( V_{{B}_{i}} ) = V_{{B}_{i}}\). For the moment we assume that, at time \(t_{G}\), no BCell switches from one state (active/inactive) to the other, postponing this case to the end of the section. Then Eq. (26) reduces to

$$ \begin{aligned} &\sum_{i} \underbrace{W^{{B}_{i}}_{{G}_{k}}}_{(\mathrm{V})} \underbrace{ {\mathcal {G}}_{B} ( A_{{B}_{i}} )}_{(\mathrm{II})} \Biggl( - \frac{1}{\tau _{B}} V_{{B}_{i}} + \underbrace{\sum _{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} V_{{A}_{j}}}_{(\mathrm{III})} + \underbrace{F_{{B}_{i}}(t)}_{(\mathrm{I})} \Biggr) \\ &\quad = -\sum_{i} \underbrace{W^{{B}_{i}}_{{G}_{k}}}_{(\mathrm{V})} \underbrace{{\mathcal {G}}'_{B} ( A_{{B}_{i}} )}_{(\mathrm{II})} V_{{B}_{i}}(t) \frac{d A_{{B}_{i}}}{dt} + \underbrace{w_{\mathrm{gap}} [ V_{{G}_{k}} - V_{{G}_{k-1}} ]}_{(\mathrm{IV})}. \end{aligned} $$

This general equation emphasises the respective role of (I), stimulus (term \(F_{{B}_{i}}(t)\)); (II), gain control (terms \({\mathcal {G}}_{B} ( A_{{B}_{i}} )\), \({\mathcal {G}}'_{B} ( A_{{B}_{i}} )\)); (III), ACell lateral connectivity (term \(W^{{A}_{j}}_{{B}_{i}}\)); (IV), gap junctions (term \(w_{\mathrm{gap}} [ V_{{G}_{k}} (t'_{G_{A}}) - V_{{G}_{k-1}} (t'_{G_{A}}) ]\)); (V), pooling (terms \(W^{{B}_{i}}_{{G}_{k}}\)). Note that we could as well consider a symmetric gap junctions connectivity where we would have a term \(w_{\mathrm{gap}} [ -V_{{G}_{k+1}} + 2 V_{{G}_{k}} - V_{{G}_{k-1}} ]\) in IV. The equation terms have been arranged this way for reasons that become clear in the next lines. It is not possible to solve this equation in full generality, but it can be used to understand the respective role of each component.

In the absence of gain control and lateral connectivity (\(W^{{A}_{j}}_{{B}_{i}}=0\), \(w_{\mathrm{gap}}=0\)), the peak in GCell \({G}_{k}\) voltage at time \(t'_{G}\) is given by

$$ \sum_{i} W^{{B}_{i}}_{{G}_{k}} \frac{d V_{i_{\mathrm{drive}}}}{d t} = 0. $$

This generalises the definition of \(t_{B}\), time of peak of a single BCell, to a set of pooled BCells, and we will proceed along the same lines as in Sect. 3.1.1. We fix as reference time 0 the time when the pooled voltage becomes positive. It increases then until the time \(t'_{G}\) when \(\sum_{i} W^{{B}_{i}}_{{G}_{k}} \frac{d V_{i_{\mathrm{drive}}}}{d t}=0\). Thus, \(\sum_{i} W^{{B}_{i}}_{{G}_{k}} \frac{d V_{i_{\mathrm{drive}}}}{d t}\) is positive on \([0,t'_{G}[ \) and vanishes at \(t'_{G}\).

We now show that, in the presence of gain control, the peak occurs at time \(t_{G} < t'_{G}\) leading to anticipation induced by gain control and generalising the effect observed for one BCell in Sect. 3.1.1. Indeed, Eq. (27) reads now as follows:

$$ \sum_{i} W^{{B}_{i}}_{{G}_{k}} {\mathcal {G}}_{B} ( A_{{B}_{i}} ) \frac{d V_{i_{\mathrm{drive}}}}{d t} = -\sum _{i } W^{{B}_{i}}_{{G}_{k}} {\mathcal {G}}'_{B} ( A_{{B}_{i}} ) V_{i_{\mathrm{drive}}}(t) \frac{d A_{{B}_{i}}}{dt}. $$

Because \(0 \leq {\mathcal {G}}_{B} ( A_{{B}_{i}} ) \leq 1\), \(\sum_{i} W^{{B}_{i}}_{{G}_{k}} {\mathcal {G}}_{B} ( A_{{B}_{i}} ) \frac{d V_{i_{\mathrm{drive}}}}{d t} \leq \sum_{i} W^{{B}_{i}}_{{G}_{k}} \frac{d V_{i_{\mathrm{drive}}}}{d t}\) so that the left-hand side in (29) reaches 0 at a time \(t_{G} \leq t'_{G}\). The right-hand side is positive for the same reasons as in Sect. 3.1.1. The same mathematical argument holds as well, using the intermediate value theorem to show that \(t_{G} < t'_{G}\).

We now investigate Eq. (27) with the two terms of lateral connectivity: (III) ACells and (IV) gap junctions. The effect of gap junctions is straightforward. A positive term \(w_{\mathrm{gap}} [ V_{{G}_{k}} - V_{{G}_{k-1}} ]\) increases the right-hand side of Eq. (27). As developed in Sect. 3.3, this arises when the stimulus propagates in the preferred direction of the cell inducing a wave of activity propagating ahead of the stimulus. In view of the qualitative argument developed above using the intermediate value theorem, this can enhance the anticipation time. This deserves, however, a deeper study developed in Sect. 3.3.

The effect of ACells is less evident, as the term \(( - \frac{1}{\tau _{B}} V_{{B}_{i}} + \sum_{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} V_{{A}_{j}} + F_{{B}_{i}}(t) )\) can have any sign, so that the network effect can either advance or delay the ganglion response, as illustrated in several examples in the next section. As we show, this term is in general related to a wave of activity, enhancing or weakening the anticipation effect, as shown in Sect. 3.2.

Let us finally discuss what happens when some BCell switches from one state (active/inactive) to the other (i.e. \(V_{{B}_{i}} = \Theta _{B}\)). In this case, taking into account definition (5), the derivative \({\mathcal {N}}'_{B}(V_{{B}_{i}})=\frac{1}{2}\). Thus, when a BCell reaches the lower threshold, there is a large variation in \({\mathcal {N}}'_{B}(V_{{B}_{i}})\), leading to a positive contribution in (26) and an additional term \(\frac{1}{2} \sum_{i} W^{{B}_{i}}_{{G}_{k}} {\mathcal {G}}_{B} ( A_{{B}_{i}} ) ( - \frac{1}{\tau _{B}} V_{{B}_{i}} + \sum_{j=1}^{N_{A}} W^{{A}_{j}}_{{B}_{i}} V_{{A}_{j}} + F_{{B}_{i}}(t) )\) in the left-hand side of (27), where the sum runs over the cells switching state. As we see in Sect. 3.2, this can have an important impact on the anticipation time.

Anticipation time at the GCell level

We now show that the firing rate of the GCell k, given by (22), reaches its maximum at a time \(t_{G_{A}} < t_{G}\). From (22), at time \(t_{G_{A}}\):

$$ \frac{d V_{{G}_{k}}}{dt}= \frac{V_{{G}_{k}}}{1+A_{{G}_{k}}} \frac{d A_{{G}_{k}}}{dt}.$$

\(V_{{G}_{k}}\) starts from 0 and increases on the time interval \([ 0,t_{G} ]\), thus \(\frac{d V_{{G}_{k}}}{dt}\) is positive on \([ 0,t_{G} ]\) and vanishes at \(t_{G}\). Thus, there is a time \(t_{d} < t_{G}\) such that \(\frac{d V_{{G}_{k}}}{dt}\) increases on \([0,t_{d}]\) and decreases on \([ t_{d},t_{G} ]\). The right-hand side of (30) starts from 0 at \(t=0\) and stays strictly positive until either \(V_{{G}_{k}}\) vanishes, which occurs for \(t > t_{G}\), or until \(\frac{d A_{{G}_{k}}}{dt}\) vanishes. We choose the characteristic time \(\tau _{G}\) and the intensity \(h_{G}\) in (20) so that \(\frac{d A_{{G}_{k}}}{dt}>0\) on \([ 0,t_{G} ]\). Thus, \(\frac{V_{{G}_{k}}}{1+A_{{G}_{k}}} \frac{d A_{{G}_{k}}}{dt} >0\) on \([ 0,t_{G} ]\). Therefore, in the time interval \([ t_{d},t_{G} ]\), \(\frac{d V_{{G}_{k}}}{dt}\) decreases to 0, while \(\frac{V_{{G}_{k}}}{1+A_{{G}_{k}}} \frac{d A_{{G}_{k}}}{dt}\) increases from 0. From the intermediate value theorem, these two curves have to intersect at a time \(t_{G_{A}} < t_{G}\).
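A small numerical experiment illustrates this argument: driving the ganglion activation equation (20) with a smooth voltage bump and locating the peak of the rate \({\mathcal {N}}_{G}(V_{{G}_{k}}) {\mathcal {G}}_{G}(A_{{G}_{k}})\) shows that it precedes the peak of \(V_{{G}_{k}}\). Parameters here are illustrative placeholders, and \({\mathcal {N}}_{G}\) is taken linear above a zero threshold.

```python
import numpy as np

# Illustrative check that ganglion gain control advances the response peak.
dt, T = 1e-4, 1.0
ts = np.arange(0.0, T, dt)
V = np.exp(-((ts - 0.5) ** 2) / (2.0 * 0.1 ** 2))   # pooled voltage, peak at t = 0.5

tau_G, h_G = 0.1, 20.0
A = np.zeros_like(ts)
for n in range(1, len(ts)):                          # Euler scheme for Eq. (20)
    A[n] = A[n - 1] + dt * (-A[n - 1] / tau_G + h_G * V[n - 1])

rate = V / (1.0 + A)          # N_G(V) * G_G(A) with G_G(A) = 1/(1+A), A >= 0
t_G = ts[np.argmax(V)]        # peak time of the pooled voltage
t_GA = ts[np.argmax(rate)]    # peak time of the gain-controlled firing rate
```

As expected from the intermediate value argument, `t_GA` falls strictly before `t_G`.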

We finally define the total anticipation time of a GCell as follows:

$$ \Delta = t_{B_{c}} - t_{G_{A}}, $$

where \(t_{B_{c}}\) is the peak of the BCell at the centre of the BCells pooling to that GCell.

Anticipation variability: stimulus characteristics

In general, Δ depends on gain control, lateral connectivity as well as characteristics of the stimulus such as speed and contrast. This has been shown mathematically in Eq. (25) for a single BCell. Here, we investigate numerically the dependence of the total anticipation time of a GCell when the stimulus is a bar of infinite height, width σ mm, travelling in one dimension at speed v mm/s with contrast \(C \in [ 0,1 ]\). Results are shown in Fig. 5. This figure is a calibration later used to compare to the effects induced by lateral connectivity.

Figure 5

Maximum firing rate and anticipation time variability with stimulus parameters in the gain control layer of the model. Left: contrast (with \(v = 1\text{ mm/s}\) and size = 90 μm); middle: size (with \(v = 2\text{ mm/s}\) and contrast = 1); right: speed (with contrast = 1 and size = 162 μm)

We first observe that anticipation increases with contrast, as observed experimentally [9]. Indeed, increasing the contrast increases \(V_{i_{\mathrm{drive}}}(t)\), thereby accelerating the growth of \(A_{i}\) so that gain control takes place earlier (Fig. 5(a)). We also notice that anticipation increases with the width of the object up to a maximum (Fig. 5(b)). Finally, the model shows a decrease in anticipation as a function of velocity, as evidenced experimentally [9, 47] (Fig. 5(c)). Indeed, when the velocity increases, \(V_{\mathrm{drive}}\) varies faster than the characteristic activation time \(\tau _{a}\), and the adaptation peak value is lower. Consequently, gain control has a weaker effect and the peak activity is less shifted than when the bar is slow.

A large part of these effects can be understood from Eq. (25). Note, however, here that simulation of Fig. 5 takes into account the convolution of a moving bar with the receptive field, the pooling effect and gain control at the stage of GCells.

In Fig. 5 we also show the evolution of GCells maximum firing rate as a function of the moving bar velocity, contrast and size. We observe that it increases with these parameters, an expected result.

The potential role of ACell lateral inhibition on anticipation

In this section we study the potential effect of ACells (pathway III of Fig. 1) on motion anticipation. We restrict to the case where there are as many BCells as ACells (\(N_{B}=N_{A} \equiv N\)) so that the matrices \(W^{{A}}_{{B}}\) and \(W^{{B}}_{{A}}\) are square matrices. We first derive general mathematical results (for the full derivation, see Appendix D) before considering the two types of connectivity described in Sect. 2.3.3.

Mathematical study

Dynamical system

We study mathematically the dynamical system (16), which we write in a more convenient form. We use Greek indices \(\alpha ,\beta ,\gamma = 1 , \ldots, 3N\) and define the state vector \({\mathcal {X}}\) as follows:

$$ \vec {{\mathcal {X}}}_{\alpha }= \textstyle\begin{cases} V_{{B}_{i}}, & \alpha =i, i=1 , \ldots, N; \\ V_{{A}_{i}}, & \alpha =N+i, i=1 , \ldots, N; \\ A_{i}, & \alpha =2N+i, i=1 , \ldots, N. \end{cases} $$

Likewise, we define the stimulus vector \(\vec {{\mathcal {F}}}_{\alpha }=F_{{B}_{i}}\) if \(\alpha =1 , \ldots, N\) and \(\vec {{\mathcal {F}}}_{\alpha }=0\) otherwise. Then dynamical system (16) has the general form

$$ \frac{d \vec {{\mathcal {X}}}}{dt} = {\mathcal {H}}(\vec {{\mathcal {X}}}) + \vec {{\mathcal {F}}}(t), $$

where \({\mathcal {H}}(\vec {{\mathcal {X}}})\) is a nonlinear function, via the function \(R_{{B}_{i}} ( V_{{B}_{i}}, A_{{B}_{i}} )\) of Eq. (8), featuring gain control and low voltage threshold. The nonlinear problem can be simplified using the piecewise linear approximation (10). Indeed, there is a domain of \(\mathbb {R}^{3N}\)

$$ \Omega = \biggl\{ V_{{B}_{i}} \geq \theta _{B}, A_{{B}_{i}} \in \biggl[ 0,\frac{2}{3} \biggr], i=1 , \ldots, N \biggr\} , $$

where \(R_{{B}_{i}} ( V_{{B}_{i}}, A_{{B}_{i}} )=V_{{B}_{i}}\) so that (16) is linear and can be written in the form

$$ \frac{d \vec {{\mathcal {X}}}}{dt} = {\mathcal {L}}.\vec {{\mathcal {X}}}+ \vec {{\mathcal {F}}}(t), $$


with

$$ {\mathcal {L}}= \begin{pmatrix} -\frac{I_{N,N}}{\tau _{B}} &W^{{A}}_{{B}}& 0_{N,N} \\ W^{{B}}_{{A}} & -\frac{I_{N,N}}{\tau _{A}} & 0_{N,N} \\ h_{B} I_{N,N} & 0_{N,N} &-\frac{I_{N,N}}{\tau _{a}} \end{pmatrix}, $$

where \(I_{N,N}\) is the \(N \times N\) identity matrix and \(0_{N,N}\) is the \(N \times N\) zero matrix. This corresponds to intermediate activity, where neither the BCell gain control (9) nor the low threshold (5) is active. We first study this case and then describe what happens when trajectories of (32) leave this domain, activating the low voltage threshold or gain control.
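For a concrete check, the block matrix \({\mathcal {L}}\) can be assembled numerically and its spectrum inspected. The sketch below uses the feed-forward choice \(W^{{B}}_{{A}} = w^{+} I_{N,N}\) and a nearest-neighbour \({\mathcal {W}}\) (Sect. 2.3.3) with illustrative parameter values; since the activity block is fed only by the BCell voltages, \(-1/\tau _{a}\) appears as an eigenvalue of multiplicity N, whatever the connectivity.

```python
import numpy as np

def build_L(W_AB, W_BA, tau_B=0.1, tau_A=0.1, tau_a=0.2, h_B=10.0):
    """Assemble the 3N x 3N block matrix L (BCells, ACells, activities)."""
    N = W_AB.shape[0]
    I, Z = np.eye(N), np.zeros((N, N))
    return np.block([
        [-I / tau_B, W_AB,       Z],
        [W_BA,       -I / tau_A, Z],
        [h_B * I,    Z,          -I / tau_a],
    ])

N, w_plus, w_minus = 10, 1.0, 2.0
W_BA = w_plus * np.eye(N)                # feed-forward: BCells -> ACells
W = np.eye(N, k=1) + np.eye(N, k=-1)     # nearest-neighbour topology
W_AB = -w_minus * W                      # inhibition: ACells -> BCells
L = build_L(W_AB, W_BA)

eigs = np.linalg.eigvals(L)
# count the eigenvalues equal to -1/tau_a (here tau_a = 0.2)
n_taua = int(np.sum(np.abs(eigs - (-1.0 / 0.2)) < 1e-6))
```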

The idea of using such a phase space decomposition with piecewise linear approximations has been used in a different context by Coombes et al. [23] and in [14, 15, 18].

We consider the evolution of the state vector \(\vec {{\mathcal {X}}}(t)\) from an initial time \(t_{0}\). Typically, \(t_{0}\) is a reference time where the network is at rest before the stimulus is applied. So, the initial condition \(\vec {{\mathcal {X}}}(t_{0})\) will be set to 0 without loss of generality.

Linear analysis

The general solution of (34) is

$$ \vec {{\mathcal {X}}}(t)= \int _{t_{0}}^{t} e^{{\mathcal {L}}(t-s)}.\vec {{\mathcal {F}}}(s) \,ds. $$

The behaviour of solution (36) depends on the eigenvalues \(\lambda _{\beta }, \beta =1 , \ldots, 3N\), of \({\mathcal {L}}\) and on its eigenvectors \(\vec {{\mathcal {P}}}_{\beta }\) with entries \({\mathcal {P}}_{\alpha \beta }\). The matrix \({\mathcal {P}}\) transforms \({\mathcal {L}}\) into Jordan form (\({\mathcal {L}}\) is not diagonalizable when \(h_{B} \neq 0\), see Appendix D.1). Whatever the form of the connectivity matrices \(W^{{B}}_{{A}}\), \(W^{{A}}_{{B}}\), the last N eigenvalues are always \(\lambda _{\beta }=-\frac{1}{\tau _{a}}\), \(\beta =2N+1 , \ldots, 3N\).

In Appendix D.1 we show the following general result (which does not depend on the specific form of \(W^{{B}}_{{A}}\), \(W^{{A}}_{{B}}\); they only need to be square and diagonalizable):

$$ {\mathcal {X}}_{\alpha }(t) = V_{\alpha _{\mathrm{drive}}}(t) + {\mathcal {E}}^{B}_{B, \alpha }(t)+{\mathcal {E}}^{B}_{A,\alpha }(t)+ {\mathcal {E}}^{B}_{a,\alpha }(t), \quad \alpha =1 , \ldots, 3N, $$

where the drive term (1) is extended here to 3N dimensions, with \(V_{\alpha _{\mathrm{drive}}}(t)=0\) if \(\alpha > N\). The other terms have the following definition and meaning:

$$ {\mathcal {E}}^{B}_{B,\alpha }(t)=\sum _{\beta =1}^{N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds,\quad \alpha =1 , \ldots, N, $$

corresponds to the indirect effect, via the ACell connectivity, of the BCell drive on BCell voltages (i.e. the drive excites BCell i, which acts on BCell j via the ACell network);

$$ {\mathcal {E}}^{B}_{A,\alpha }(t)=\sum _{\beta =N+1}^{2N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds, \quad \alpha =N+1 , \ldots, 2N, $$

corresponds to the effect of BCell drive on ACell voltages, and

$$ \begin{aligned} {\mathcal {E}}^{B}_{a,\alpha }(t)={}& h_{B} \Biggl( \sum_{\beta =1}^{2N} \sum_{\gamma =1}^{N} {\mathcal {P}}_{\alpha -2N, \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \frac{\lambda _{\beta }+\frac{1}{\tau _{B}}}{\lambda _{\beta }+\frac{1}{\tau _{a}}} \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds \\ &{}+ \frac{-\frac{1}{\tau _{B}} + \frac{1}{\tau _{a}}}{\lambda _{\beta } +\frac{1}{\tau _{a}}} A^{0}_{\alpha -2N}(t) \Biggr),\quad \alpha =2N+1 , \ldots, 3N, \end{aligned} $$

corresponds to the effect of the BCell drive on the dynamics of the BCell activity variables. The first term of (40) corresponds to the action of BCells and ACells on the activity of BCells via lateral connectivity. In the second term,

$$ A^{0}_{\alpha -2N}(t)= \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} V_{\alpha -2N_{\mathrm{drive}}}(s) \,ds $$

corresponds to the direct effect of the BCell voltage with index \(\alpha -2N\) on its activity (see Eq. (7)).

To sum up, Eq. (37) describes the direct effect of a time-dependent stimulus (first term) and the indirect lateral network effects it induces. The term \({\mathcal {E}}^{B}_{a,\alpha }(t)\) is what activates the gain control. In the piecewise linear approximation (10), the BCell i triggers its gain control when its activity

$$ {\mathcal {E}}^{B}_{a,\alpha }(t) > \frac{2}{3},\quad \alpha =2N+i. $$

This relation extends the computation made in Sect. 3.1.1 for isolated BCells to the case of a BCell under the influence of ACells. On this basis, let us now discuss how the network effect influences the activation of gain control and, thereby, anticipation.

The structure of terms (38), (39), (40) is interpreted as follows. The drive (index \(\gamma =1 , \ldots, N\)) excites the eigenmodes \(\beta =1 , \ldots, 3N\) of \({\mathcal {L}}\) with a weight proportional to \({\mathcal {P}}^{-1}_{\beta \gamma }\). The mode β in turn excites the variable \(\alpha =1 , \ldots, 3N\) with a weight proportional to \({\mathcal {P}}_{\alpha \beta }\). The time dependence and the effect of the drive are controlled by the integral \(\int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds\). For example, when the stimulus has the Gaussian form (3) and cells are spaced with a distance δ so that cell γ is located at \(x=\gamma \delta \), we have, taking \(t_{0} \to -\infty \):

$$ \int _{-\infty }^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds= \frac{A_{0}}{v} e^{\frac{1}{2} \frac{\sigma ^{2} \lambda _{\beta }^{2}}{v^{2}}} e^{- \frac{\lambda _{\beta }}{v} ( \gamma \delta - v t )} \Pi \biggl[ \frac{\lambda _{\beta }\sigma }{v} - \frac{1}{\sigma } ( \gamma \delta - v t ) \biggr], $$

where \(\Pi (x)\) is the cumulative distribution function of the standard Gaussian probability (see definition, Eq. (60) in Appendix B). This is actually the same computation as (25) with \(\lambda _{\beta }=-\frac{1}{\tau _{a}}\). Equation (43) corresponds to a front, separating a region where \(\Pi [ \dots ]=0\) from a region where \(\Pi [ \dots ]=1\), propagating at speed v with an interface of width σ, multiplied by an exponential factor \(e^{-\frac{\lambda _{\beta }}{v} ( \gamma \delta - v t )}\). Here, the sign of the real part of \(\lambda _{\beta }\), \(\lambda _{\beta ,r}\), is important. If \(\lambda _{\beta ,r} < 0\), the front has the shape depicted in Fig. 6(top). It decays exponentially fast as \(t \to +\infty \), with a time scale \(\frac{1}{ \vert \lambda _{\beta ,r} \vert }\). Conversely, when \(\lambda _{\beta ,r} > 0\), it increases exponentially fast, with a time scale \(\frac{1}{\lambda _{\beta ,r}}\), as \(t \to +\infty \), thereby enhancing the network effect and accelerating the activation of nonlinear effects (low threshold or gain control) leading the trajectory out of Ω. Remark that the peak of the drive occurs at \(\gamma \delta - v t=0\) and that the inflexion point of the function \(\Pi (x)\) is at \(x=0\). Thus, when \(\lambda _{\beta }< 0\), the front is a bit behind the drive, whereas it is a bit ahead when \(\lambda _{\beta }> 0\).
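The closed form (43) can be checked numerically. The sketch below assumes the Gaussian stimulus (3) is the normalised profile \(A_{0}/(\sigma \sqrt{2\pi })\, e^{-(\gamma \delta -vs)^{2}/2\sigma ^{2}}\); with a different normalisation of (3), a constant prefactor would appear.

```python
import numpy as np
from math import erf, sqrt, pi, exp
from scipy.integrate import quad

def Pi(x):
    """Standard Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

A0, v, sigma, lam = 1.0, 2.0, 0.2, -0.5   # amplitude, speed, width, eigenvalue
x = 0.0                                    # cell position gamma * delta
t = 0.1

def drive(s):
    # Normalised Gaussian drive centred on the moving bar (assumed normalisation).
    return A0 / (sigma * sqrt(2.0 * pi)) * exp(-(x - v * s) ** 2 / (2.0 * sigma ** 2))

# Left-hand side: the integral int_{-inf}^{t} e^{lam (t-s)} V_drive(s) ds.
lhs, _ = quad(lambda s: exp(lam * (t - s)) * drive(s), -np.inf, t)

# Right-hand side: the closed form (43).
rhs = (A0 / v) * exp(0.5 * sigma**2 * lam**2 / v**2) \
      * exp(-(lam / v) * (x - v * t)) \
      * Pi(lam * sigma / v - (x - v * t) / sigma)

assert abs(lhs - rhs) < 1e-7
```
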

Figure 6

Front (43) for different values of \(\lambda _{\beta }\) (purple) as a function of time for the cell \(\gamma =0\). All figures are drawn with \(v=2\text{ mm/s}\); \(\sigma =0.2\text{ mm}\). Top. \(\lambda _{\beta }=-0.5~\text{ms}^{-1}\); Bottom. \(\lambda _{\beta }=0.5~\text{ms}^{-1}\)

Having unstable eigenvalues is not the only way to get out of Ω. Indeed, even if all eigenvalues are stable, the drive itself can lead some cells to get out of this set. When the trajectory of dynamical system (34) gets out of Ω, two cases are then possible:

  1. (i)

    Either a BCell i is such that \(V_{{B}_{i}} < \theta _{B}\). In this case, \(R_{{B}_{i}} ( V_{{B}_{i}}, A_{{B}_{i}} )=0\). Then, in the matrix \({\mathcal {L}}\), the line i of \(W^{{B}}_{{A}}\), i.e. the line \(i+N\) of \({\mathcal {L}}\), is replaced by a line of zeros. This corresponds to a stable eigenvalue \(-\frac{1}{\tau _{A}}\) of \({\mathcal {L}}\), controlling the exponential instability observed in Fig. 6(bottom). Thus, too low BCell voltages trigger a re-stabilisation of the dynamical system.

  2. (ii)

    There are BCells such that condition (42) holds; then gain control is activated and system (32) becomes nonlinear. Here, we leave the scope of the linear analysis, and we have not been able to solve the problem mathematically. There is, however, a simple qualitative argument. If cell i enters the gain control region, then the corresponding line \(i+N\) in the matrix \(W^{{B}}_{{A}}\) of \({\mathcal {L}}\) is replaced by \(W^{{B}_{i}}_{{A}} {\mathcal {G}}_{B} ( A_{{B}_{i}} )\), which rapidly decays to 0 (see, e.g. Fig. 4(e)). By the same argument as in (i), this generates a stable eigenvalue \({\sim}{-}\frac{1}{\tau _{A}}\), which likewise controls the exponential instability.
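The stabilisation argument of (i) and (ii) can be checked directly on a toy version of the voltage block of \({\mathcal {L}}\) (a sketch with the activity variables omitted and random couplings standing in for the actual matrices): once the coupling row of a rectified BCell is zeroed, row \(i+N\) is purely diagonal and \(-\frac{1}{\tau _{A}}\) becomes an exact eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau_B, tau_A = 5, 0.3, 0.1

# Toy voltage block of the operator L (activity variables omitted for brevity):
# rows 1..N    (BCells):  dV_B/dt = -V_B/tau_B + W_AB V_A + drive
# rows N+1..2N (ACells):  dV_A/dt = -V_A/tau_A + W_BA V_B
W_AB = rng.normal(size=(N, N))
W_BA = rng.normal(size=(N, N))
L = np.block([[-np.eye(N) / tau_B, W_AB],
              [W_BA, -np.eye(N) / tau_A]])

# Rectification of BCell i (V_Bi < theta_B): its output to ACells vanishes,
# i.e. row i of W_BA -- row N+i of L -- is zeroed out.
i = 2
L[N + i, :N] = 0.0

# Row N+i is now purely diagonal, so -1/tau_A is an exact eigenvalue.
eigs = np.linalg.eigvals(L)
assert np.min(np.abs(eigs + 1.0 / tau_A)) < 1e-8
```
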

Equation (37) therefore features the direct effect of the stimulus as well as the indirect effect via the amacrine network, corresponding to a weighted sum of propagating fronts, generated by the stimulus and influencing a given cell through the connectivity pathways. These fronts interfere either constructively, inducing a wave of activity that enhances the effect of the stimulus and thereby anticipation, or destructively, somewhat lowering the stimulus effect. The fine balance between “constructive” and “destructive” interferences depends on the connectivity matrix via the spectrum of \({\mathcal {L}}\) and its projection vectors \(\vec {{\mathcal {P}}}_{\beta }\). For example, complex eigenvalues introduce time oscillations which are likely to generate destructive interferences, unless specific resonance conditions exist between the imaginary parts of the eigenvalues \(\lambda _{\beta }\). Such resonances are known to exist, e.g. in neural network models exhibiting a Ruelle–Takens transition to chaos [65], and they are closely related to the spectrum of the connectivity matrix [16]. Although we are not in this situation here, our linear analysis clearly shows the influence of the spectrum of \({\mathcal {L}}\), itself constrained by \({\mathcal {W}}\), on the network response to stimuli and on anticipation.

Spectrum of \({\mathcal {L}}\)

This argument invites us to consider different situations where one can work out how connectivity impacts the spectrum of \({\mathcal {L}}\) and thereby anticipation. We therefore provide some general results about the spectrum of \({\mathcal {L}}\) and potential linear instabilities before considering specific examples. These results are proved in Appendix D.2. As stated in Sect. 2.3.3, to go further in the analysis, we now assume that each BCell connects to only one ACell, with a weight \(w^{+}\) uniform across BCells, so that \(W^{{B}}_{{A}} = w^{+} I_{N,N}\), \(w^{+}>0\). We also assume that ACells connect to BCells through a connectivity matrix \({\mathcal {W}}\), not necessarily symmetric, with a uniform weight \(- w^{-}\), \(w^{-}>0\), so that \(W^{{A}}_{{B}} = -w^{-} {\mathcal {W}}\).

We denote by \(\kappa _{n}, n=1 , \ldots, N\), the eigenvalues of \({\mathcal {W}}\), ordered as \(\vert \kappa _{1} \vert \leq \vert \kappa _{2} \vert \leq \cdots \leq \vert \kappa _{N} \vert \), and by \(\vec {\psi }_{n}\) the corresponding eigenvectors. We normalise \(\vec {\psi }_{n}\) so that \(\vec {\psi }_{n}^{\dagger }.\vec {\psi }_{n}=1\), where † denotes the adjoint. (Note that, as \({\mathcal {W}}\) is not symmetric in general, eigenvectors are complex.) From the eigenvalues and eigenvectors of \({\mathcal {W}}\), one can compute the eigenvalues and eigenvectors of \({\mathcal {L}}\) (see Appendix D.2), and infer stability conditions for the linear system. The main conclusions are the following:

  1. 1.

    The stability of the linear system is controlled by the reduced, dimensionless parameter:

    $$ \mu = w^{-} w^{+} \tau ^{2} \geq 0, $$

    where

    $$ \frac{1}{\tau }=\frac{1}{\tau _{A}} - \frac{1}{\tau _{B}}, $$

    with a degenerate case when \(\tau _{A}=\tau _{B}\) considered in the Appendix.

  2. 2.

    If \({\mathcal {W}}\) is symmetric, its eigenvalues \(\kappa _{n}\) are real, but the eigenvalues of \({\mathcal {L}}\) can be real or complex. Each \(\kappa _{n}\) actually corresponds to a pair of eigenvalues \(\lambda _{n}^{\pm }\) of \({\mathcal {L}}\) (see Eq. (71)).

    1. (a)

      If \(\kappa _{n} < 0\), the two corresponding eigenvalues of \({\mathcal {L}}\) are real and one of the two corresponding eigenmodes of \({\mathcal {L}}\) becomes unstable when

      $$ w^{-} w^{+} > - \frac{1}{\tau _{A} \tau _{B}} \frac{1}{\kappa _{n}}. $$
    2. (b)

      If \(\kappa _{n} > 0\) and if \(\frac{1}{\tau } \neq 0\), the corresponding eigenvalues of \({\mathcal {L}}\) are complex conjugate if

      $$ \mu > \frac{1}{4 \kappa _{n}} \equiv \mu _{n,c}. $$

      The corresponding eigenmodes are always stable.

  3. 3.

    If \({\mathcal {W}}\) is asymmetric, the eigenvalues \(\kappa _{n}\) are complex in general, \(\kappa _{n}=\kappa _{n,r} + i \kappa _{n,i}\). The eigenvalues of \({\mathcal {L}}\) have the form \(\lambda _{\beta }= \lambda _{\beta ,r} + i \lambda _{\beta ,i}\), \(\beta =1 , \ldots, 2N\), with

    $$ \textstyle\begin{cases} \lambda _{\beta ,r} = -\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \frac{1}{\sqrt{2}} \sqrt{a_{n}+u_{n}} ; \\ \lambda _{\beta ,i} = \pm \frac{1}{2 \tau } \frac{1}{\sqrt{2}} \sqrt{u_{n}-a_{n}}, \end{cases} $$

    where \(a_{n}=1- 4 \mu \kappa _{n,r}\) and \(u_{n}=\sqrt{ ( 1- 4 \mu \kappa _{n,r} )^{2} + 16 \mu ^{2} \kappa _{n,i}^{2}} =\sqrt{1-8 \mu \kappa _{n,r} + 16 \mu ^{2} \vert \kappa _{n} \vert ^{2}}\). Note that we recover the real case when \(\kappa _{n,i}=0\) by setting \(u_{n}=a_{n}\).

    Instability occurs if \(\lambda _{\beta ,r}>0\) for some β. This gives

    $$ a_{n}+u_{n} > 2 \frac{\tau ^{2}}{\tau _{AB}^{2}}, $$

    a condition on μ depending on \(\kappa _{n,r}\) and \(\kappa _{n,i}\).


The introduction of the dimensionless parameter μ simplifies the study of the joint influence of \(w^{-}\), \(w^{+}\), τ on the dynamics because stability is controlled by μ only. In other words, a bifurcation condition of the form \(\mu =\mu _{c}\) signifies that this bifurcation holds when the parameters \(w^{-}\), \(w^{+}\), τ lie on the manifold defined by \(w^{-} w^{+} \tau ^{2}=\mu _{c}\).
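The formulas of items 2 and 3 can be implemented directly. The sketch below (with illustrative values of τ and \(\tau _{AB}\), the composite time scale defined in Appendix D.2) checks that, for a real positive κ, the eigenvalues of \({\mathcal {L}}\) become complex exactly at \(\mu _{n,c} = \frac{1}{4 \kappa _{n}}\), as stated in item 2(b).

```python
import numpy as np

def lambda_parts(kappa, mu, tau, tau_AB):
    """Real and imaginary parts of the pair of eigenvalues of L associated
    with an eigenvalue kappa of W, following the formulas of the text
    (tau_AB is the composite time scale defined in the appendix)."""
    a = 1.0 - 4.0 * mu * kappa.real
    u = np.sqrt(a**2 + 16.0 * mu**2 * kappa.imag**2)
    pm = np.array([1.0, -1.0])
    lam_r = -1.0 / (2.0 * tau_AB) + pm / (2.0 * tau) * np.sqrt((a + u) / 2.0)
    lam_i = pm / (2.0 * tau) * np.sqrt((u - a) / 2.0)
    return lam_r, lam_i

tau, tau_AB = 0.15, 0.05          # illustrative values only

# Real positive kappa: eigenvalues become complex exactly when mu > 1/(4 kappa).
kappa = 2.0 + 0.0j
mu_c = 1.0 / (4.0 * kappa.real)
_, li_below = lambda_parts(kappa, 0.5 * mu_c, tau, tau_AB)
_, li_above = lambda_parts(kappa, 2.0 * mu_c, tau, tau_AB)
assert np.allclose(li_below, 0.0)          # below mu_c: real eigenvalues
assert np.all(np.abs(li_above) > 0.0)      # above mu_c: complex conjugate pair
```
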

We now illustrate this in two examples of connectivity and the associated instabilities.

Nearest neighbours connectivity

Eigenmodes of the linear regime

We consider the case where the matrix \({\mathcal {W}}\), connecting ACells to BCells, is a matrix of nearest-neighbours symmetric connections. In this case, \({\mathcal {W}}\) can be written in terms of the discrete Laplacian Δ on a d-dimensional regular lattice, \(d=1, 2\), with lattice spacing \(\delta _{A}=\delta _{B}\) set here equal to 1 without loss of generality:

$$ {\mathcal {W}}=2 d I + \Delta . $$

Because of this relation, we will often use the terminology Laplacian connectivity for the nearest-neighbours connectivity. We also assume that dynamics holds on a square lattice with null boundary conditions. That is, ACells and BCells are located on a d-dimensional grid with indices \(i_{x},i_{y}=0 , \ldots, L+1\), where the voltage and activity of cells with indices \(i_{x}=0\), \(i_{x}=L+1\), \(i_{y}=0\) or \(i_{y}=L+1\) vanish.

The eigenvalues and eigenvectors are explicitly known in this case. They are parametrized by a quantum number \(n=n_{x} \in \{ 1 , \ldots, L=N \} \) in one dimension and by two quantum numbers \(( n_{x},n_{y} ) \in \{ 1 , \ldots, L=N \} ^{2}\) in two dimensions. They define a wave vector \(\vec {k}_{n}= ( \frac{n_{x} \pi }{L+1},\frac{n_{y} \pi }{L+1} )\) corresponding to wave lengths \(( \frac{L+1}{ n_{x}},\frac{L+1}{ n_{y}} )\). Hence, the first eigenmode \((n_{x}=1,n_{y}=1)\) corresponds to the largest space scale (scale of the whole retina) with the smallest eigenvalue (in absolute value) \(s_{(1,1)}=2 ( \cos ( \frac{\pi }{L+1} )+\cos ( \frac{\pi }{L+1} )-2 )\). To each of these eigenmodes is related a characteristic time \(\tau _{n}=\frac{1}{\lambda _{n}}\). The slowest mode is the mode \(( 1,1 )\). In contrast, the fastest mode is the mode \((n_{x}=L,n_{y}=L)\) corresponding to the smallest space scale, the scale of the lattice spacing \(\delta =1\).

Eigenvalues \(\kappa _{n}\) can be positive or negative. Consider for example the one-dimensional case, where \(\kappa _{n}=2 \cos ( \frac{n \pi }{L+1} )\). We choose L even to avoid having a zero eigenvalue \(\kappa _{\frac{L}{2}}\). Eigenvalues \(\kappa _{n}\), \(n=1 , \ldots, \frac{L}{2}\), are positive, thus the corresponding eigenvalues \(\lambda _{n}^{\pm }\) of \({\mathcal {L}}\) are complex and stable. The modes with the largest space scale \(\frac{L}{n}\) are therefore stable for the linear dynamical system with oscillations. Eigenvalues \(\kappa _{n}\), \(n=\frac{L}{2}+1 , \ldots, L\), are negative, thus the corresponding eigenvalues \(\lambda _{n}^{\pm }\) of \({\mathcal {L}}\) are real. From (46) the mode n becomes unstable when

$$ w^{-} w^{+} > - \frac{1}{\tau _{A} \tau _{B}} \frac{1}{2 \cos ( \frac{n \pi }{L+1} )}. $$

Therefore, the first mode to become unstable is the mode L with the smallest space scale 1 (lattice spacing). For large L, this happens for \(w^{-} w^{+} \sim \frac{1}{2} \frac{1}{\tau _{A} \tau _{B}}\). This instability induces spatial oscillations at the scale of the lattice spacing. When \(w^{-} w^{+}\) further increases, the next modes become unstable. This instability results in a wave packet following the drive (as shown in Fig. 6). The width of this wave packet is controlled by the unstable modes and by nonlinear effects. We now illustrate the relations of these spectral properties with the mechanism of anticipation.
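The closed-form spectrum \(\kappa _{n}=2 \cos ( \frac{n \pi }{L+1} )\) and the large-L instability threshold can be checked numerically. A minimal sketch: the matrix below is the 1D nearest-neighbours adjacency with null boundary conditions, i.e. \(2dI+\Delta \) with \(d=1\).

```python
import numpy as np

L_size = 100                     # number of cells (even, as in the text)
# Nearest-neighbour symmetric connectivity W in one dimension with null
# boundary conditions: the adjacency matrix of a path graph.
W = np.diag(np.ones(L_size - 1), 1) + np.diag(np.ones(L_size - 1), -1)

# Closed-form spectrum: kappa_n = 2 cos(n pi / (L+1)), n = 1..L.
n = np.arange(1, L_size + 1)
kappa_formula = 2.0 * np.cos(n * np.pi / (L_size + 1))

kappa_numeric = np.linalg.eigvalsh(W)
assert np.allclose(np.sort(kappa_formula), kappa_numeric, atol=1e-8)

# The most negative mode (n = L) drives the first instability; for large L the
# threshold w- w+ > -1/(tau_A tau_B kappa_L) approaches 1/(2 tau_A tau_B).
tau_A, tau_B = 0.1, 0.3
threshold = -1.0 / (tau_A * tau_B * kappa_formula[-1])
assert abs(threshold - 1.0 / (2.0 * tau_A * tau_B)) / threshold < 1e-3
```
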

Numerical results

In all the following 1D simulations, we consider a bar of width \(150~\mu \text{m}\), moving in one dimension at constant speed \(v=3~\text{mm/s}\). We simulate 100 BCells, 100 ACells and 100 GCells placed on a 1D horizontal grid with a uniform spacing of \(\delta _{b} = \delta _{a} = \delta _{g} = 30~\mu \text{m}\) between two consecutive cells. At time \(t=0\), the first cell lies \(100~\mu \text{m}\) to the right of the leading edge of the moving bar. We set \(\tau _{B} = 300\text{ ms}\), \(\tau _{a} = 50\text{ ms}\), \(\tau _{A} = 100\text{ ms}\), corresponding to \(\tau =150\text{ ms}\) (Eq. (45)). We vary the value of the weights \(w^{+}\), \(w^{-}\). For the sake of simplicity, we also choose \(w^{+} = -w^{-}=w\) so as to have only one control parameter. We investigate how the bipolar anticipation time \(\Delta _{B}\) and the maximum of the response \(R_{{B}}\) depend on w. This is summarised in Fig. 7(top), where we show the effect of gain control alone (blue horizontal line, independent of w), the effect of ACell lateral connectivity alone (red triangles) and the compound effect (white squares). The anticipation time is averaged over all cells. On the same figure (bottom) we show the responses of two neighbouring cells lying at the centre of the lattice.

Figure 7

Anticipation in the Laplacian (nearest-neighbours) case. Top. Anticipation time and maximum bipolar response as a function of the connectivity weight w. The blue line corresponds to gain control alone (it does not depend on w). Red triangles correspond to the effect of lateral ACell connectivity without gain control. White squares correspond to the compound effect of ACell connectivity and gain control. The three regimes A, B, C are discussed in the text. Bottom. Response curves of ACells and BCells corresponding to the three regimes: (A) \(w = 0.05~\text{ms}^{-1}\), with a small cross-inhibition; (B) \(w = 0.3~\text{ms}^{-1}\), with an opposition in activity between the blue cell (50) and the red cell (51); (C) \(w = 0.6~\text{ms}^{-1}\), where the red cell (51) is completely inhibited by cell 50

As w increases, we observe three areas of interest: the first (A) corresponds to a regime where ACell connectivity has a negative effect on anticipation, competing with gain control. As w is small, anticipation is controlled by the direct pathway I, II of Fig. 1, from BCells to GCells, with a small inhibition coming from ACells, thereby decreasing the voltage of BCells and impairing the effect of gain control. This explains why the anticipation time with lateral connectivity and gain control is smaller than the anticipation time with gain control alone. The network effect (red triangles) on anticipation time nevertheless increases with w. This corresponds to the “push–pull” effect already evoked in Sect. 2.3: when a BCell \({B}_{i}\) feels the stimulus, its voltage increases, raising the voltage of the connected ACell, which inhibits the next BCell \({B}_{i+1}\), thereby inducing a feedback loop, the push–pull effect, enhancing the voltage of \({B}_{i}\).

In zone (B) the push–pull effect becomes more efficient than gain control alone. In this region, the voltage of the BCell feeling the bar increases fast, while the voltage of its neighbours becomes more and more negative, enhancing the feedback loop. This holds until the voltage rectification (5) takes place, which is the time when the dynamical system gets out of Ω. The push–pull effect then saturates and \(V_{{B}_{i}}\) reaches a maximum, corresponding to a peak in activity. This peak is reached faster than the peak in the function \({\mathcal {G}}_{B}(A)\). Thus, the peak of \(R_{{B}_{i}}(t)\) occurs at the same time as the peak of \({\mathcal {N}}_{B}(V_{{B}_{i}}(t))\) and hence before the reference peak (the time \(t_{B}\) for isolated BCells defined in Sect. 3.1.1). In other words, the ACell lateral connectivity allows the BCell to outperform the gain control mechanism for anticipation. As w increases in zone B, the push–pull effect (averaged over BCells) reaches a maximum, then decreases. This is because the increase in w makes the inhibitory effect of ACells stronger and stronger on silent BCells, which then remain silent longer and longer: the ACell voltage increases with w, and it takes longer for it to decrease and de-inhibit the neighbours. The silent cells are less and less sensitive to the stimulus, being strongly and durably inhibited.

In region C, anticipation is again dominated by gain control. In this case, the effect on cells depends on the parity of their index. The response of BCells is either completely suppressed or identical to the response of the reference case (with gain control alone). This is why the average anticipation time with network effect and gain control is about half of that with gain control alone. Cells that are inhibited do not participate in anticipation, and the others anticipate in the same way as with gain control alone. Note that this “parity” effect is due to the nearest-neighbours connectivity and the symmetry of interactions.

We now interpret and complete these results from the point of view of the spectrum of \({\mathcal {L}}\) and the associated dynamics. The fastest mode to destabilise corresponds to the smallest space scale, i.e. the lattice spacing. This is a mode with alternating sign at the scale of the lattice. We call it the “push–pull” mode, as it is precisely what produces the push–pull effect. When the push–pull mode becomes unstable, the excited BCell becomes more and more excited and the next BCell more and more inhibited. However, the time this takes, \(\tau _{L}\), has to be compared to the time the bar stays in the RF, \(\tau _{\mathrm{bar}}\) (and, more generally, to the time it takes the RF kernel to respond to the bar). In the case of the simulation, \(\sigma _{\mathrm{center}} = 90~\mu \text{m}\) (see Appendix A, Table 1) and \(v=3~\mu \text{m/ms}\), giving a characteristic time \(\tau _{\mathrm{bar}}=270\text{ ms}\), whereas, as we observed, \(\tau _{L} < 100\text{ ms}\). The push–pull mode is therefore much faster than \(\tau _{\mathrm{bar}}\), so the push–pull effect takes place fast and leads to a fast exponential increase of the front depicted in Fig. 6(bottom). This explains the rapid increase of the network anticipation effect observed in regions A, B of Fig. 7.

Random connectivity

In this section, we study the behaviour of the model using the more realistic, probabilistic type of connectivity presented in Sect. 2.3.3 and more thoroughly studied in Appendix C. Within this framework, a given ACell \({A}_{i}\) receives the upstream activity of the BCell \({B}_{i}\) lying at the same position, with a constant weight w. The same ACell inhibits the BCells with which it is coupled through the random adjacency matrix \({\mathcal {W}}\), generated by the probabilistic model of connectivity, with the weight matrix \(W^{{A}}_{{B}}= -w {\mathcal {W}}\). We recall that the connectivity depends on a scale parameter ξ (for the branch length) and on the mean and variance \(\bar{n}\), \(\sigma _{n}\) of the distribution of the number of branches. These parameters can be found in Table 1 in Appendix A.

Eigenmodes of the linear regime

Similarly to Sect. 3.2.2 we now analyse the spectrum of \({\mathcal {L}}\) when \({\mathcal {W}}\) is a random connectivity matrix. Although a couple of results can be established (using the Perron–Frobenius theorem), we have not been able to find general mathematical results on the spectrum or eigenvectors of this family of random matrices. We thus performed numerical simulations.

The spectrum of \({\mathcal {L}}\) is deduced from the spectrum of \({\mathcal {W}}\) as exposed above. The spectrum of \({\mathcal {W}}\) depends on \(\bar{n}\), \(\sigma _{n}\) and ξ. In Fig. 8 we have plotted, on the left, an example of such a spectrum. This is the spectral density (the distribution of eigenvalues in the complex plane) obtained from the diagonalization of \(10\text{,}000\) matrices of size \(100 \times 100\) (so the statistics is made over \(10^{6}\) eigenvalues). We note that the largest eigenvalue is always real and positive, a straightforward consequence of the Perron–Frobenius theorem [38, 69]. More generally, we observe an over-density of real eigenvalues. The same holds for random Gaussian matrices with independent entries \({\mathcal {N}}(0,\frac{1}{N})\) [32], whose asymptotic density converges to the circular law [40]. The shape of the spectral density in our model differs from the circular law though, and it depends on the parameters \(\bar{n}\), \(\sigma _{n}\) and ξ.
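The Perron–Frobenius property invoked above is easy to check on a simplified stand-in for the probabilistic connectivity: a hypothetical 1D model in which the connection probability decays exponentially with distance at scale ξ, ignoring the branch-number distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
N, xi = 100, 2.0   # number of cells, characteristic branch length (grid units)

# Simplified stand-in for the paper's probabilistic connectivity: ACell j
# connects to BCell i with a probability decaying exponentially with distance.
pos = np.arange(N)
dist = np.abs(pos[:, None] - pos[None, :])
W = (rng.random((N, N)) < np.exp(-dist / xi)).astype(float)
np.fill_diagonal(W, 0.0)

eigs = np.linalg.eigvals(W)
top = eigs[np.argmax(np.abs(eigs))]

# W is non-negative, so by Perron-Frobenius its spectral radius is attained
# by a real, non-negative eigenvalue.
assert abs(top.imag) < 1e-6 and top.real >= 0.0
```
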

Figure 8

Spectral density of eigenvalues for \(\xi =2\), \(\bar{n}=4\), \(\sigma _{n}=1\). Top left. For the matrix \({\mathcal {W}}\) (density estimated over \(10\text{,}000\) samples); the density is represented in colour plots, in log scale. The colour bar refers to powers of 10. Top right. Spectral density of \({\mathcal {L}}\) for \(w=0.05\). Bottom left. Spectral density of \({\mathcal {L}}\) for \(w=0.1\). Bottom right. Spectral density of \({\mathcal {L}}\) for \(w=0.15\). Unstable eigenvalues are to the right of the vertical dashed line \(x=0\)

On the same figure we show the corresponding spectral density of \({\mathcal {L}}\) obtained from Eq. (48) for \(w=0.05, 0.1, 0.15\). We have taken here \(\tau _{A}=30\text{ ms}\), \(\tau _{B}=10\text{ ms}\) to better see the transitions with w (level lines in Fig. 9). There is an evident symmetry with respect to \(-\frac{1}{2 \tau _{AB}} = -0.066\), as expected from the mathematical analysis. We see that the largest eigenvalue is real (although it is not necessarily related to the largest eigenvalue of \({\mathcal {W}}\)). We also see that, as w increases, a large number of (complex) eigenvalues become unstable. There is actually a frontier of instability that we have plotted in the plane \((w, \xi )\) for different values of \(\bar{n}\). This is shown in Fig. 9 (dashed line). The level line 0 is the frontier of instability of the linear dynamical system. This frontier has the (empirical) form \((\xi -\xi _{0}) (w-w_{0})=c\), where c has the dimension of a characteristic speed.

Figure 9

Heat map of the largest real part of the eigenvalues in the plane \((w, \xi )\) for different values of \(\bar{n}\). Left. \(\bar{n}=1\). Right. \(\bar{n}=4\). Colour lines are level lines. The level line 0 is the frontier of instability of the linear dynamical system

What matters here is that there are complex unstable eigenvalues with no specific resonance relations between them. They are therefore prone to generate destructive interferences in (37).

Numerical results

In Fig. 10 we consider, similarly to Fig. 7 for the Laplacian connectivity, the effect of random connectivity on anticipation, compared to the pure gain control mechanism. In contrast to the Laplacian case, we have here more parameters to handle: ξ, which controls the characteristic branch length, and \(\bar{n}\), \(\sigma _{n}\), which control the distribution of the number of branches. We present here a few results where ξ varies, whereas the average number of branches is \(\bar{n}=2\) (\(\sigma _{n}=1\)). A more systematic study is done in [74]. The interest of varying ξ is to start from a situation close to the Laplacian case (characteristic distance \(\xi =1\)) and to increase ξ so as to see how the size of the dendritic tree of ACells may impact anticipation. This is a preliminary step toward considering different physiological ACell types (e.g. narrow-, medium- or wide-field [29]). Note, however, that the probability of connection given the distance between cells (fixed by \(\bar{n}\), \(\sigma _{n}\)) implicitly impacts w and the anticipation effects.

Figure 10

Average anticipation in the random connectivity case. Top. Bipolar anticipation time and maximum in the response \(R_{{B}}\) as a function of the connectivity weight w in the case of a random connectivity graph with \(\xi = 1\), \(\bar{n}=2\) and \(\sigma _{n}=1\). Bottom. Response curves of ACells and BCells \(51-52\) corresponding to the three regimes: (A) \(w = 0.05~\text{ms}^{-1}\), (B) \(w = 0.3~\text{ms}^{-1}\), (C) \(w = 0.6~\text{ms}^{-1}\)

The main difference with the Laplacian case is the asymmetry of connections. Here, symmetry means that if ACell j connects to BCell i, then ACell i connects to BCell j too. This does not necessarily hold for random connectivity, and this has a strong impact on the push–pull effect and on anticipation. So, even if the connectivity is short-range when ξ is small, mainly connecting nearest neighbours, we already observe a marked difference with the Laplacian case. This is shown in Fig. 10, where \(\xi =1\). Similarly to the Laplacian case, we observe three main regions depending on w. To have the same representation as Fig. 7, we present \(V_{{A}}\), \({\mathcal {N}}_{{B}}\), \(R_{{B}}\) for two connected cells (here, ACell 51 and BCell 52). However, in this case, the connection is not symmetric: ACell 51 inhibits BCell 52, but ACell 52 does not inhibit BCell 51.

We observe three regimes, as in the Laplacian case. In the first region (A), the random ACell connectivity has a negative effect on anticipation, as compared to gain control alone. However, since in this case the “push–pull” effect is not elicited, this decay simply comes from the fact that BCell 52 receives an inhibition from ACell 51 that reduces the effect of gain control. This inhibition, however, is not strong enough to significantly shift the peak response, as it does in region (B).

Indeed, in region (B), the inhibition of BCell 52 is strong enough to outperform the effect of gain control. In this case, and similarly to the Laplacian case, the peak of \(R_{{B}_{i}}(t)\) occurs at the same time as the peak of \({\mathcal {N}}_{B}(V_{{B}_{i}}(t))\) and before the reference peak. However, this effect is not consistent over all cells and only occurs for BCells that receive active inhibition. This explains why the performance of the Laplacian connectivity is better, on average, in this region.

Finally, as w grows higher, the inhibition grows stronger, completely inhibiting BCell 52. Cells that do not receive any inhibition, such as BCell 49 in this example, keep a response identical to the response without ACell connectivity. The fraction of cells receiving inhibition in this case being quite small (about 15%), this explains why the stationary value of anticipation is fairly close to the value with gain control alone.

The role of the characteristic distance

In Fig. 11, we analyse the effect of the characteristic length ξ on anticipation. On the top of the figure we represent the joint effect of the random ACell connectivity and gain control on anticipation for three values of ξ. At the bottom we represent the effect of the random ACell connectivity alone for the same values of ξ. We observe that the performance in anticipation decreases with ξ. More precisely, we still observe an anticipatory effect, as shown in Fig. 11(bottom), but this effect is not able to compete with gain control alone. Even worse, the compound effect shown in Fig. 11(top) is deleterious, since increasing w makes the anticipation time smaller and smaller.

Figure 11

Role of the characteristic branch length ξ on anticipation. Top. The joint effect of the random ACell connectivity and gain control on anticipation for \(\xi =1,2,3\). Left. Average bipolar anticipation time. Right. Maximum value of the bipolar response \(R_{{B}}\). Bottom. The single effect of the random ACell connectivity on anticipation with the same representation

This spurious effect can be interpreted through the analysis made in Sect. 3.2.1, Eq. (37). From the spectrum of \({\mathcal {L}}\), we see that there are unstable complex eigenvalues whose number increases with w. These eigenvalues are prone to generate destructive interferences, especially when their number becomes large as w increases, explaining the small peak in region B. The consequence on cell activity and gain control can be dramatic, as seen in the red trace of Fig. 10(B, bottom), line \(R_{{B}}\). This depends on the precise connectivity pattern: long-range connections from ACells to BCells induce a desensitisation of BCells which, in contrast to the Laplacian connectivity case, is not counterbalanced by the push–pull effect.


The two numerical examples considered in this section emphasise the role of symmetry in the synapses and, more generally, the role of complex versus real eigenvalues in the spectrum of \({\mathcal {L}}\). Recall that, from Sect. 3.2.1, if \({\mathcal {W}}\) is symmetric, complex eigenvalues are always stable, so, for the type of architecture considered here, unstable destructive interferences only occur when \({\mathcal {W}}\) is asymmetric. This leads to several questions, potential subjects for further studies.

  1.

    How much does anticipation depend on the degree of asymmetry in the matrix \({\mathcal {W}}\)? The way we generate the random connectivity in the model does not allow us to tune the degree of asymmetry (i.e. the probability that a connection \({A}_{j} \to {B}_{i}\) exists simultaneously with a connection \({A}_{i} \to {B}_{j}\)). One therefore has to generate the connectivity differently. From the mathematical analysis made in Appendix C, a distribution depending exponentially on the distance, with a tunable probability of having a symmetric connection, could be appropriate. We are not aware of any experimental results characterising this degree of symmetry of the connections in the retina. On mathematical grounds, and from the analogy of the spectrum of \({\mathcal {W}}\) with a circular law, one could expect the spectrum of \({\mathcal {W}}\) to become more and more elongated along the real axis as the degree of symmetry increases, following an elliptic-like law [55].

  2.

    Nonlinear effects. The destructive interference effect in our model is partly due to the linear nature of the ACell dynamics. In nonlinear dynamics, eigenvalues of the evolution operator can display resonance conditions favouring constructive interferences. On biological grounds, it is for example known that starburst amacrine cells display periodic bursting activity during development, disappearing a few days after birth [94]. Bursting and its disappearance can be understood in the framework of bifurcation theory of a nonlinear dynamical system featuring these cells [50]. In this setting, even if they are not bursting in the mature stage, SACs remain sensitive to specific stimulation that can temporally synchronise them, thereby enhancing the network effect with a potential effect on anticipation.

The potential role of gap junctions on anticipation

In this section, we study the network ability to improve anticipation in the presence of gap junctions coupling, as in Eq. (23), and gain control at the level of GCells.

We first present mathematical results and then show simulation results.

Mathematical study

We use a continuous space limit for a one-dimensional lattice; the extension to two dimensions is straightforward. Here, x corresponds to the preferred direction of the direction sensitive cells. We consider a continuous spatio-temporal field \(V(x,t)\), \(x \in \mathbb {R}\), such that \(V_{{G}_{k}}(t) \equiv V(k \delta _{G},t)\). We likewise assume that \(V_{{G}_{k}}^{(P)}(t) \equiv V^{(P)} (k \delta _{G},t)\) for some continuous function \(V^{(P)}(x,t)\) corresponding to the GCells' bipolar pooling input (17), and we take the limit \(\delta _{G} \to 0\). In this limit Eq. (23) becomes

$$ \frac{\partial V_{{G}}}{\partial t} = f(x,t) - v_{\mathrm{gap}} \frac{\partial V_{{G}}}{\partial x} + O\bigl(\delta _{G}^{2}\bigr), $$

where \(v_{\mathrm{gap}} \equiv w_{\mathrm{gap}} \delta _{G}\) has the dimension of a speed and \(\frac{\partial V^{(P)}(x,t)}{\partial t} \equiv f(x,t)\). Finally, we denote by \(C(x)\) the initial profile so that \(V(x,t_{0})=C(x)\).
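This continuum limit can be checked numerically. The sketch below is our own minimal illustration, with arbitrary units, no drive (\(f = 0\)), and assuming the nearest-neighbour asymmetric coupling \(dV_{k}/dt = -w_{\mathrm{gap}} (V_{k} - V_{k-1})\) underlying the transport limit; it integrates the lattice dynamics with an explicit upwind step and verifies that an initial profile is advected at speed \(v_{\mathrm{gap}} = w_{\mathrm{gap}} \delta _{G}\):

```python
import numpy as np

# Asymmetric nearest-neighbour coupling with no drive (f = 0):
# dV_k/dt = -w_gap * (V_k - V_{k-1}), with v_gap = w_gap * dG held fixed.
dG, v_gap = 0.01, 1.0
w_gap = v_gap / dG
x = np.arange(1000) * dG
V = np.exp(-(x - 2.0)**2 / (2 * 0.2**2))   # initial Gaussian profile C(x)

dt = 0.2 * dG / v_gap                       # CFL-stable explicit Euler step
for _ in range(int(round(1.0 / dt))):       # integrate up to t = 1
    V = V - w_gap * dt * (V - np.roll(V, 1))

# The peak should have been transported from x = 2.0 to x = 2.0 + v_gap * 1.0
assert abs(x[np.argmax(V)] - 3.0) < 0.05
```

The upwind scheme is slightly diffusive, but the ballistic displacement of the peak, the signature of the transport equation, is unaffected.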


Neglecting terms of order \(\delta _{G}^{2}\), the general solution of (52) is

$$ V_{G}(x,t)=C\bigl(x-v_{\mathrm{gap}}(t-t_{0}) \bigr) + \int _{t_{0}}^{t} f\bigl(x-v_{\mathrm{gap}}(t-u),u \bigr) \,du. $$

Equation (52) is a transport equation of ballistic type [74]. For example, if we consider a stimulation of the form \(V^{(P)}(x,t)=h(x-v t)\), where h is a Gaussian pulse of the form (3), propagating with speed v, and an initial profile \(C(x)= h ( x-v t_{0} )\), the voltage of GCells obeys

$$ V_{G}(x,t) = \underbrace{\frac{v}{v-v_{\mathrm{gap}}} h ( x-vt )}_{\pi _{\mathrm{stim}}} - \underbrace{\frac{v_{\mathrm{gap}}}{v-v_{\mathrm{gap}}} h \bigl( x-v_{\mathrm{gap}}t-(v-v_{\mathrm{gap}}) t_{0} \bigr)}_{ \pi _{\mathrm{gap}}}. $$

When \(v_{\mathrm{gap}}=0\), the GCell voltage follows the stimulation, i.e. \(V_{G}(x,t)=h ( x-vt )\). In the presence of gap junctions, there are two pulses: the first one \(\pi _{\mathrm{stim}}\) with amplitude \(\frac{v}{v-v_{\mathrm{gap}}}\) propagating at speed v and following the stimulation; the second one \(\pi _{\mathrm{gap}}\) with amplitude \(-\frac{v_{\mathrm{gap}}}{v-v_{\mathrm{gap}}}\) propagating at speed \(v_{\mathrm{gap}}\).
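The two-pulse decomposition can be checked against the general solution by direct numerical quadrature. This is a minimal sketch of our own; the parameter values (σ, v, \(v_{\mathrm{gap}}\)) are arbitrary:

```python
import numpy as np

# Gaussian pulse h, its derivative, and the drive f(x,t) = d/dt h(x - v t)
sigma = 0.1
h  = lambda u: np.exp(-u**2 / (2 * sigma**2))
hp = lambda u: -u / sigma**2 * h(u)            # h'(u)

v, v_gap, t0 = 3.0, 1.0, 0.0                   # stimulus and gap-junction speeds
f = lambda x, t: -v * hp(x - v * t)            # f = dV^(P)/dt for V^(P) = h(x - vt)

def V_general(x, t, n=20001):
    """General solution: C(x - v_gap (t - t0)) plus the transported drive."""
    u = np.linspace(t0, t, n)
    g = f(x - v_gap * (t - u), u)
    integral = np.sum((g[1:] + g[:-1]) / 2 * np.diff(u))   # trapezoid rule
    return h(x - v_gap * (t - t0) - v * t0) + integral

def V_two_pulse(x, t):
    """Two-pulse closed form: pi_stim minus pi_gap."""
    return (v / (v - v_gap)) * h(x - v * t) \
         - (v_gap / (v - v_gap)) * h(x - v_gap * t - (v - v_gap) * t0)

assert abs(V_general(2.0, 0.7) - V_two_pulse(2.0, 0.7)) < 1e-4
```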

We have the following cases (we take \(t_{0}=0\) for simplicity). An illustration is given in Fig. 12.

  1.

    If v and \(v_{\mathrm{gap}}\) have the same sign:

    (A)

      If \(v_{\mathrm{gap}} < v\), the front \(\pi _{\mathrm{stim}}\) is amplified by a factor \(\frac{v}{v-v_{\mathrm{gap}}}\), whereas there is a refractory front \(\pi _{\mathrm{gap}}\), proportional to \(v_{\mathrm{gap}}\), behind the excitatory pulse.

    (B)

      If \(v=v_{\mathrm{gap}}\), \(V_{G}(x,t) =h ( x-v_{\mathrm{gap}}t )+v_{\mathrm{gap}}(t-t_{0}) h' ( x-v_{\mathrm{gap}}t )\) which diverges like t when \(t \to \infty \) and \(x \to + \infty \). This divergence is a consequence of the limit \(\delta _{G} \to 0\) in (52).

    (C)

      If \(v_{\mathrm{gap}} > v\), the amplitude of \(\pi _{\mathrm{stim}}\) follows the stimulation with a negative sign (hyperpolarisation), whereas \(\pi _{\mathrm{gap}}\) is ahead of the stimulation, with a positive sign, travelling at speed \(v_{\mathrm{gap}}\).

  2.

    If v and \(v_{\mathrm{gap}}\) have opposite signs, we set \(v=-\alpha v_{\mathrm{gap}}\) with \(\alpha >0\). Then the front \(\pi _{\mathrm{stim}}\) follows the stimulus but is attenuated by a factor \(\frac{\alpha }{1+\alpha }\). The front \(\pi _{\mathrm{gap}}\) propagates in the opposite direction with an attenuated amplitude \(\frac{1}{1+\alpha }\).
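The attenuation factors in the opposite-sign case follow by substituting \(v=-\alpha v_{\mathrm{gap}}\) into the two-pulse amplitudes; a quick numerical check of our own:

```python
# Substituting v = -alpha * v_gap into the two-pulse amplitudes:
# pi_stim amplitude v/(v - v_gap)  ->  alpha/(1 + alpha),
# pi_gap amplitude -v_gap/(v - v_gap)  ->  1/(1 + alpha).
v_gap = 1.0
for alpha in (0.5, 1.0, 4.0):
    v = -alpha * v_gap
    assert abs(v / (v - v_gap) - alpha / (1 + alpha)) < 1e-12
    assert abs(-v_gap / (v - v_gap) - 1 / (1 + alpha)) < 1e-12
```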

This shows that these gap junctions favour the response to motion in the preferred direction and attenuate the response to motion in the opposite direction, although the attenuation is weak. The effect is reinforced by gain control [74]. The most interesting case is 1(C), where these gap junctions can induce a wave of activation ahead of the stimulation.

Figure 12

Anticipation for non symmetric gap junctions. Top. GCell anticipation time and maximum firing rate as a function of the gap junction velocity \(v_{\mathrm{gap}}\). Bottom. Response curves of GCells corresponding to the three regimes: (A) \(v_{\mathrm{gap}} = 0.6~\text{mm/s}\), (B) \(v_{\mathrm{gap}} = 3~\text{mm/s}\), (C) \(v_{\mathrm{gap}} = 12~\text{mm/s}\). The curves display 3 main regimes (see text): In (A) \(v_{\mathrm{gap}} < v\) and the positive front propagates at the same speed as the pooling voltage triggered by the stimulus; In (B), \(v_{\mathrm{gap}} = v\), the positive and negative fronts both propagate at the speed \(v_{\mathrm{gap}}\) and the amplitude of the positive front (\(V_{{G}}(t)\)) increases with t; In (C), \(v_{\mathrm{gap}} > v\) and the positive front propagates faster than the stimulus so that the peak of activity arises earlier. The negative front propagates at the stimulus speed

Effect of gain control

When the low voltage threshold \({\mathcal {N}}_{G}\) (19) and the gain control \({\mathcal {G}}_{G}(A)\) (21) are applied to \(V_{G}(x,t)\), there are two effects: (i) the hyperpolarised front is cut by \({\mathcal {N}}_{G}\); (ii) the positive pulse induces a rise in activity, which in turn triggers the ganglion gain control \({\mathcal {G}}_{G}(A)\), inducing an anticipated peak in the response of the GCell, similar to what happens with BCells, though with a different form of the gain control. Moreover, in contrast to pathway II of Fig. 1, where only gain control generates anticipation, in pathway IV the wave of activity generated by gap junctions increases anticipation through two distinct effects. If \(v_{\mathrm{gap}} < v\), the cell's response propagates at the same speed as the stimulus, but its amplitude is larger than without gap junctions (term \(\pi _{\mathrm{stim}}\)). From Eq. (53) this amounts to increasing \(h_{B}\) to an effective value \(h_{B} \frac{v}{v-v_{\mathrm{gap}}}\), improving the anticipation time (with a saturation of the effect, though, as \(v_{\mathrm{gap}} \to v\)). If \(v_{\mathrm{gap}} > v\), the cell's response propagates faster than the stimulus (term \(\pi _{\mathrm{gap}}\)), so that the cell responds before the response time without gap junctions. This also increases the anticipation time.

Numerical illustrations

We consider a bar of width \(200~\mu \text{m}\) moving in one dimension at constant speed \(v=3~\text{mm/s}\). We simulate 100 GCells, placed on a 1D horizontal grid with a spacing of \(30~\mu \text{m}\) between two consecutive cells. At time \(t=0\), the first cell lies \(100~\mu \text{m}\) from the leading edge of the moving bar.

We investigate in Fig. 12 how the GCell anticipation time and firing rate depend on \(v_{\mathrm{gap}}\). The top shows the effect of gain control alone (blue horizontal line, independent of \(v_{\mathrm{gap}}\)), the effect of the asymmetric gap junction connectivity alone (red triangles) and the compound effect (white squares). The anticipation time is averaged over all GCells. In the bottom part of the figure, we show the responses of two GCells, of indices 30 and 60, spaced \(900~\mu \text{m}\) apart.

As explained in Sect. 3.3.1, we observe the three regimes A, B, C mathematically anticipated above. Note that, for these parameter values, the negative trailing front predicted in A is not visible.

Symmetric gap junctions

The asymmetry observed by Trenholm et al. is due to the specific structure of the direction selective GCell dendritic tree [80]. In general, however, gap junction connectivity is expected to be symmetric. So, to be complete, we consider here the effect of symmetric gap junctions on anticipation. It is not difficult to derive the equivalent of Eq. (52) in this case as well: it is a diffusion equation of the form

$$ \frac{\partial V_{{G}}}{\partial t} = f(x,t) + D_{\mathrm{gap}} \Delta V_{{G}} + O\bigl(\delta _{G}^{4}\bigr), $$

where \(D_{\mathrm{gap}}=w_{\mathrm{gap}} \delta _{G}^{2}\) is the diffusion coefficient and Δ is the Laplacian operator.
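As for the asymmetric case, the diffusive limit can be illustrated on the lattice. The sketch below is our own, in arbitrary units, with no drive and assuming the symmetric nearest-neighbour coupling \(dV_{k}/dt = w_{\mathrm{gap}} (V_{k+1} - 2V_{k} + V_{k-1})\); it checks the diffusion signature: the peak stays put while the variance of the profile grows like \(2 D_{\mathrm{gap}} t\):

```python
import numpy as np

# Symmetric nearest-neighbour coupling with no drive (f = 0):
# dV_k/dt = w_gap * (V_{k+1} - 2 V_k + V_{k-1}), with D_gap = w_gap * dG**2
dG, D_gap = 0.01, 0.05
w_gap = D_gap / dG**2
x = np.arange(2000) * dG
V = np.exp(-(x - 10.0)**2 / (2 * 0.2**2))    # Gaussian, sigma0 = 0.2

dt = 0.2 * dG**2 / D_gap                     # stable explicit Euler step
for _ in range(int(round(0.1 / dt))):        # integrate up to t = 0.1
    V = V + w_gap * dt * (np.roll(V, -1) - 2 * V + np.roll(V, 1))

var = np.sum(V * (x - 10.0)**2) / np.sum(V)
assert abs(x[np.argmax(V)] - 10.0) < 0.02              # peak does not move...
assert abs(var - (0.2**2 + 2 * D_gap * 0.1)) < 1e-3    # ...but variance grows
```

In contrast to the asymmetric (transport) case, nothing is advected: activity only spreads around the drive.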

The response to a Gaussian stimulus of the form (3) reads as follows:

$$ V_{{G}}(x,y,t)= [ H\stackrel {x,y,t}{\ast }f ], $$

where

$$ H(x,y,t)= \frac{e^{-\frac{x^{2}+y^{2}}{4 D_{\mathrm{gap}} t }}}{4 \pi D_{\mathrm{gap}} t} $$

is the heat kernel of the diffusion equation.

Recall that \(f(x,t) \equiv \frac{\partial V^{(P)}(x,t)}{\partial t}\). So, if \(V^{(P)}(x,t)=h(x-v t)\), where h is a Gaussian pulse of the form (3) propagating with speed v, then f is a bimodal function of the form \(\frac{v}{\sigma ^{2}} h(x-v t) \times (x-v t)\), whose shape can be seen in Fig. 13(bottom), second row. The convolution with the heat kernel leads to a front propagating at the same speed as the stimulus, with a diffusive spreading whose rate is controlled by \(D_{\mathrm{gap}}\). In particular, there is a positive bump ahead of the motion, which can induce anticipation, as shown in Fig. 13(top). The effect is weak, though, essentially because the diffusive spreading makes the amplitude of the response decrease fast as a function of \(D_{\mathrm{gap}}\).
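The bimodal form of f can be verified directly from \(h(u)=e^{-u^{2}/2\sigma ^{2}}\); a small numerical check of our own, with arbitrary parameter values:

```python
import numpy as np

sigma, v = 0.1, 3.0
h = lambda u: np.exp(-u**2 / (2 * sigma**2))

def f_numeric(x, t, eps=1e-6):
    """Centred finite difference of f(x,t) = d/dt h(x - v t)."""
    return (h(x - v * (t + eps)) - h(x - v * (t - eps))) / (2 * eps)

def f_bimodal(x, t):
    """Claimed closed form: (v / sigma^2) * h(x - v t) * (x - v t)."""
    return (v / sigma**2) * h(x - v * t) * (x - v * t)

for x, t in [(0.35, 0.1), (0.2, 0.1), (1.0, 0.3)]:
    assert abs(f_numeric(x, t) - f_bimodal(x, t)) < 1e-3
```

The factor \((x-vt)\) changes sign at the pulse centre, producing the positive lobe ahead of the motion and the negative lobe behind it.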

Figure 13

Anticipation for symmetric gap junctions. Top. GCell anticipation time and maximum firing rate as a function of the gap junction velocity \(v_{\mathrm{gap}}\). Bottom. Response curves of GCells for three values of \(v_{\mathrm{gap}}\): (A) \(v_{\mathrm{gap}} = 0.6~\text{mm/s}\), (B) \(v_{\mathrm{gap}} = 3~\text{mm/s}\), (C) \(v_{\mathrm{gap}} = 12~\text{mm/s}\). For consistency, we have kept the same values as in the asymmetric case. Here, the anticipation time grows continuously until saturation, while the maximum firing rate decreases like a power law as a function of \(v_{\mathrm{gap}}\)

Although this positive front slightly increases the anticipation time for small \(D_{\mathrm{gap}}\), by accelerating the triggering of gain control, the peak in the response \(R_{{B}}\) rapidly becomes led by the low-voltage peak corresponding to the positive bump. The position of this peak is, roughly, at a distance \(\sigma =\sqrt{\sigma _{\mathrm{center}}^{2}+\sigma _{B}^{2}}\) from the peak of the Gaussian pool, where \(\sigma _{\mathrm{center}}\) is the width of the centre RF and \(\sigma _{B}\) the width of the bar. This corresponds to a time \(\frac{\sigma }{v}\) ahead of the peak in the drive, fixing a maximal value for the anticipation time (see the saturation of the anticipation time curve in Fig. 13(top, left)). In our case, \(\sigma \sim 134~\mu \text{m}\), giving a saturation at \(\frac{134~\mu \text{m}}{3~\mu \text{m/ms}}=44.84~\mathrm{ms}\). A consequence of the voltage decay is the corresponding power law decay of the firing rate (\(\frac{1}{\sqrt{D_{\mathrm{gap}}}}\) for large \(D_{\mathrm{gap}}\); Fig. 13(top, right)).

To conclude, the situation with symmetric gap junctions contrasts sharply with that of direction selective gap junctions, where the response to stimuli was ballistic and did not decrease with time. On this basis we consider that, for symmetric gap junctions, the anticipation effect is irrelevant, especially taking into account the smallness of the voltage response in case C.

Numerical results

We investigate in this section how the GCell anticipation time and firing rate depend on the gap junction conductance in the case of symmetric gap junctions. In Fig. 13(top), we use the same representation as in Fig. 12. For consistency with the direction sensitive case, we choose \(v_{\mathrm{gap}}=\frac{D_{\mathrm{gap}}}{\delta _{G}}\) as the control parameter. We also take (A) \(v_{\mathrm{gap}} = 0.6~\text{mm/s}\), (B) \(v_{\mathrm{gap}} = 3~\text{mm/s}\), (C) \(v_{\mathrm{gap}} = 12~\text{mm/s}\) in Fig. 13(bottom). This corresponds to a diffusion coefficient (A) \(D_{\mathrm{gap}} = 18 \times 10^{-3}~\text{mm}^{2}\text{/s}\), (B) \(D_{\mathrm{gap}} = 90 \times 10^{-3}~\text{mm}^{2}\text{/s}\), (C) \(D_{\mathrm{gap}} = 360 \times 10^{-3}~\text{mm}^{2}\text{/s}\).
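The quoted diffusion coefficients follow from \(D_{\mathrm{gap}} = v_{\mathrm{gap}} \delta _{G}\) with the GCell spacing \(\delta _{G} = 30~\mu \text{m}\) used above; a one-line consistency check of our own:

```python
# D_gap = v_gap * delta_G, with delta_G = 30 um = 0.03 mm (GCell spacing):
delta_G = 0.03   # mm
for v_gap, D_expected in [(0.6, 18e-3), (3.0, 90e-3), (12.0, 360e-3)]:
    # v_gap in mm/s  ->  D_gap in mm^2/s
    assert abs(v_gap * delta_G - D_expected) < 1e-12
```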


In this section we have shown how direction sensitive cells coupled by gap junctions can display anticipation due to the propagation of a wave of activity ahead of the stimulus. This effect is negligible for symmetric gap junctions. Note that symmetric gap junctions are known to favour wave propagation, for example in early development (stage I, see [49] and the references therein for a recent numerical investigation). Here, gap junctions are considered in a different context due to the presence of a nonstationary stimulus triggering the wave.

Let us now comment on our computational results. How do they fit biological reality? Depending on the gap-junction conductance value, the propagation patterns we predict are quite different.

What is the typical value of \(v_{\mathrm{gap}}\) in biology? It is difficult to make an estimate from the expression \(v_{\mathrm{gap}}=\frac{g_{\mathrm{gap}} \delta _{G}}{C}\). The membrane capacitance C and the gap junction conductance can be obtained from the literature (for connexins Cx36, \(g_{\mathrm{gap}} \sim 10\mbox{--}15~\mbox{pS}\) [75]), but the distance \(\delta _{G}\) is more difficult to evaluate. In the model, this is the average distance between GCells' somas, which corresponds to \({\sim} 200\mbox{--}300~\mu\text{m}\). But in the computation with gap junctions, what matters is the length of a connexin channel, which is much smaller. Taking \(\delta _{G}\) as the distance between GCells assumes that the potential propagates between somas at the speed of a connexin channel, which is wrong because most of the propagation time is constrained by the propagation of the action potential along the dendritic tree. So we used a phenomenological argument (we thank O. Marre for pointing it out to us). The correlation time of spiking activity between GCell neighbours is about \(2\mbox{--}5~\mathrm{ms}\) for cells separated by \({\sim}200\mbox{--}300~\mu \text{m}\) [86]. This gives a speed \(v_{\mathrm{gap}}\) in the interval \([ 40, 150 ]~\text{mm/s}\), which is quite fast compared to the bar speed in experiments.
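The quoted interval is simply the ratio of the distance and correlation-time ranges; for the record (our own check):

```python
# v_gap estimated from spike-correlation data: neighbouring GCells
# ~200-300 um apart are correlated within ~2-5 ms.
d_min, d_max = 200.0, 300.0    # um
tau_min, tau_max = 2.0, 5.0    # ms

v_min = d_min / tau_max        # um/ms, numerically equal to mm/s
v_max = d_max / tau_min

assert (v_min, v_max) == (40.0, 150.0)   # mm/s
```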

So we are in case 1(C): should one then observe an experimental effect? To the best of our knowledge, an effect of DSGC gap junctions on motion anticipation has not been observed, but we do not know of experiments targeting precisely this effect. It would be interesting to block gap junctions and revisit the experiments of Berry et al. [9] or Chen et al. [20] in this condition. The difficulty is that blocking gap junctions disrupts many essential retinal pathways. We do not pursue this discussion further, concluding that our model makes a computational prediction that would be interesting to investigate experimentally.

Response to two-dimensional stimuli

In this section, we present some examples of retinal responses and anticipation to trajectories more complex than a bar moving in one dimension with a uniform speed. The aim here is not to do an exhaustive study but, instead, to assess qualitatively some anticipatory effects not considered in the previous sections.

Flash lag effect

The flash lag effect is an optical illusion in which a bar moving along a smooth trajectory and a flashed bar are presented to the subject and are perceived with a spatial displacement, while they are actually aligned. A variation of this illusion consists of a rotating bar and a bar flashed in angular alignment with it, giving rise to a perceived angular discrepancy. We have investigated this effect in our model in the presence of the different anticipatory effects considered in the paper.

Figure 14 shows the response to a bar moving smoothly, while a second bar is flashed in alignment with it for one time frame. The first line shows the stimulus, consisting of 130 frames, displayed at a refresh rate of 100 Hz, of a bar moving at 2.7 mm/s. The second line shows the GCell response with gain control, and the third line presents the effect of lateral amacrine connectivity in the case of a Laplacian graph. Keeping the same parameter values as in the 1D case, we set \(w = 0.3~\mathrm{ms}^{-1}\), corresponding to case B in Fig. 7. Finally, the last line shows the effect of asymmetric gap junctions, with a preferred orientation in the direction of motion and \(v_{\mathrm{gap}} = 9~\text{mm/s}\).

Figure 14

Flash lag effect with different anticipatory mechanisms. (A) Response to a flash lag stimulus: a bar moving in smooth motion with a second bar flashed in alignment with the first bar for one time frame. The first line shows the stimulus, the second line shows the GCells response with gain control, the third line presents the effect of lateral ACell Laplacian connectivity with \(w = 0.3~\text{ms}^{-1}\), and the last line shows the effect of asymmetric gap junctions with \(v_{\mathrm{gap}} = 9~\text{mm/s}\). (B) Time course response of (top) a cell responding to the flashed bar and (bottom) a cell responding to the moving bar. Dashed lines indicate the peak of each curve

In the case of the gain control response, the peak of the response to the moving bar is shifted by about \(10~\text{ms}\) in the direction of motion, as compared to the static bar. The flashed bar elicits a lower response, given its very short appearance compared with the characteristic time of adaptation. We chose this time short enough to avoid triggering gain control, which explains the difference with the strong response observed by Chen et al. [20] in the presence of a still bar.

In the case of amacrine connectivity, the moving bar representation is shrunk compared with the gain control case, given the prevalence of inhibition, while the level of activity for the flashed bar remains roughly the same. In this case, cells responding to the moving bar reach their peak activity slightly earlier (about 19 ms for these parameter values) than in the gain control case (Fig. 14(B, top)).

Finally, asymmetric gap junction connectivity displays a wave propagating ahead of the bar, enlarging the central blob, which becomes much larger than the size of the bar in the stimulus, while the flashed bar activity remains similar to the previous cases.

Parabolic trajectory

In this subsection, we assess the effect of the three anticipatory mechanisms on a parabolic trajectory. The interest is to have a trajectory with a change in direction and speed, hence an acceleration. The stimulus consists of 20 frames displayed at 10 Hz. The simulation parameters and connectivity weights are the same as in the previous section. Figure 15 shows the response to a dot moving along a parabolic trajectory.

Figure 15

Effect of anticipatory mechanisms on a parabolic trajectory. (A) Response to a dot moving along a parabolic trajectory. The first line shows the stimulus, the second line shows the GCell response with gain control, the third line presents the effect of lateral ACell connectivity with \(w = 0.3~\text{ms}^{-1}\), and the last line shows the effect of asymmetric gap junctions with \(v_{\mathrm{gap}} = 9~\text{mm/s}\). (B) Time course response of a cell responding to the dot near the trajectory turning point. Linear response corresponds to the response to the stimulus without gain control

In the case of gain control, the GCell response is more elongated, which distorts the dot representation near the turning point of the trajectory (\(1400\text{--}1600~\text{ms}\)). Cells responding near the turning point are still anticipating motion, as the peak of the gain control curve is slightly shifted to the left compared with the RF response (Fig. 15(B)).

In the case of amacrine connectivity, the elicited response is also more localised than the gain control response, and the flow of activity follows the stimulus more accurately. This is a direct consequence of the sensitivity of the ACell connectivity model to the stimulus acceleration. In this case, the peak response is also more shifted than in the gain control case.

Finally, the gap junction connectivity model performs worse in this case, giving rise to a propagating wave that does not follow the trajectory, since the latter is not parallel to the direction to which GCells are sensitive. Cells responding near the trajectory turning point have a higher level of activity and an increased latency, while the peak response roughly corresponds to the gain control case.

Angular anticipation

We investigate in this subsection a two-dimensional example of motion where angular anticipation takes place. The stimulus consists here of 72 frames, displayed at 100 Hz, of a bar moving at a constant angular speed of \(4.25~\text{rad/ms}\).

Figure 16 shows the retinal response to a rotating bar, with the angular orientation of the activity as a function of time for the different models. We used Matlab to estimate the bar orientation from the displayed activity, fitting the set of activated points by an ellipse whose principal axis determines the response orientation.
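The orientation estimate can be sketched as follows. This is our own minimal Python reimplementation (the paper used Matlab); it takes the principal axis of the supra-threshold activity, which coincides with the major axis of a fitted ellipse. The threshold value is an assumption for illustration.

```python
import numpy as np

def bar_orientation(activity, threshold=0.5):
    """Angle (radians) of the principal axis of supra-threshold activity."""
    ys, xs = np.nonzero(activity > threshold)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    # Principal axis = leading eigenvector of the 2x2 covariance matrix,
    # i.e. the major axis of the best-fitting ellipse of the point cloud.
    cov = pts.T @ pts / len(pts)
    eigval, eigvec = np.linalg.eigh(cov)
    major = eigvec[:, np.argmax(eigval)]
    return np.arctan2(major[1], major[0])

# Synthetic check: a thin bar drawn at 45 degrees.
img = np.zeros((50, 50))
for i in range(50):
    img[i, max(0, i - 1):i + 2] = 1.0
angle = np.degrees(bar_orientation(img)) % 180   # fold out the 180-degree ambiguity
assert abs(angle - 45.0) < 2.0
```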

Figure 16

Anticipation for a rotating bar. (A) Response to a bar rotating at \(4.25~\text{rad/ms}\). The first line shows the stimulus, the second line shows the GCell response with gain control, the third line presents the effect of lateral ACell connectivity with \(w = 0.2~\text{ms}^{-1}\), and the last line shows the effect of asymmetric gap junctions with \(v_{\mathrm{gap}} = 9~\text{mm/s}\). (B) Time course response of the bar orientation in the reconstructed retinal representations

In all three cases, one can see that the response around the centre of the bar is suppressed due to gain control adaptation. While the orientation of the gain control activity roughly follows the linear response (Fig. 16(B)), the ACell response shows a slight angular shift (frames: 250–300 ms), which is also visible in the time course of the response orientation. The ACell angular anticipation is, however, only observed during the first rotation of the bar. Interestingly, this effect vanishes during the second rotation due to a persistent effect of the activation function, generating a sort of suppressive effect that erases the second occurrence of the bar (frame: 450 ms). We should point out that the ACell connectivity weight in this simulation has been reduced to \(w = 0.2~\mathrm{ms}^{-1}\), since with the value \(w = 0.3~\mathrm{ms}^{-1}\) used in the previous simulations, the response to the second rotation of the bar is completely suppressed.

Finally, similarly to the parabolic trajectory, the gap junction connectivity model performs worse, due to the wave propagating from left to right, which once more distorts the bar shape. Consequently, the orientation of the bar activity in this case has been discarded in Fig. 16(B), as the orientation estimate gives poor results.


This section shows how lateral connectivity can play a role in the anticipation of 2D stimuli, both in the case of the classical flash lag effect and for more complex trajectories. Indeed, for a given network setting, ACell connectivity can noticeably improve anticipation with respect to gain control for all three stimuli and also has the advantage of being sensitive to trajectory shifts (Sect. 3.4.2).

While gap junction connectivity improves anticipation when the trajectory of the bar is parallel to the preferred GCells direction, it also induces more blur around the bar and shape distortion in the case of parabolic motion and rotation, suggesting a trade-off between anticipation and object recognition for this specific model.


Using a simplified model, mathematically analysed and illustrated with numerical simulations, we have given strong evidence that lateral connectivity (inhibition with ACells, gap junctions) could contribute to motion anticipation in the retina. The main argument is that a moving stimulus can, under specific, mathematically controlled conditions, induce a wave of activity propagating ahead of the stimulus thanks to lateral connectivity. This suggests that, in addition to the local gain control mechanism inducing an anticipated peak of GCell activity, lateral connectivity could induce a mechanism of neural latency reduction similar to what is observed in the cortex [8, 46, 77]. This is visible in particular in Fig. 14, where the gap junction coupling induces a wave which increases the GCell activity level before the bar reaches its RF.

Yet, these studies raise several questions and remarks. The first one is, of course, biological plausibility. At the core of the model, what makes the mathematical analysis tractable is the fact that we can reduce the dynamics, in some region of the phase space, to a linear dynamical system. This structure is afforded by two facts: (i) synapses are characterised by a simple convolution; (ii) cells, especially ACells, have a simple passive dynamics, where nonlinear effects induced e.g. by ionic channels are neglected, as well as propagation delays. As stated in the introduction, the goal here is not to be biologically realistic, but instead to illustrate potential general spatio-temporal response mechanisms taking into account specificities of the retina as compared with, e.g., the cortex. Essentially, most retinal neurons are not spiking (except GCells and some types of BCells or ACells, not considered here [2]). Yet, synapses follow the same biophysics as their cortical counterparts. As it is standard to model the whole chain of biophysical machinery triggering a post-synaptic potential upon a sharp increase of the pre-synaptic voltage by a convolution kernel [27], we adopted the same approach here. Note that this convolutional approach does not require the pre-synaptic increase in voltage to be a spike; it can be a smooth variation of the voltage. Note also that higher order convolution kernels can be considered, integrating more details of the biological machinery; these higher order kernels are represented by higher order linear differential equations [35]. Concerning point (ii), where nonlinear effects are neglected, especially for ACells (BCells have gain control), there are in fact not many available models of ACells. A linear model for predictive coding using linear ACells has been used by Hosoya et al. [43]; we discuss it in more detail below.
The nonlinear models of ACells we know of have been developed to study the retina in its early stage (retinal waves) and feature either AII ACells [21] or starburst ACells [50]. In Sect. 3.2.4 we briefly commented on how nonlinear mechanisms could enhance resonance effects in the network and thereby favour the propagation of a lateral wave of activity induced by a moving stimulus. This would of course deserve a more detailed study. Another potentially interesting nonlinear mechanism is short term plasticity, discussed below.

The second question one may ask about the model concerns the robustness of this mechanism with respect to parameters. The model contains many parameters, some of them (BCells, GCells and gain control) coming from the previous papers of Berry et al. [9] and Chen et al. [20]. Although they did not perform a structural stability analysis of their model (i.e. stability of the model with respect to small variations of parameters), we believe that the parameters are tuned away from bifurcation points, so that slight changes in their (isolated cell) model parameters would not induce big changes. As we have shown, the situation changes dramatically when cells are connected via lateral connectivity. Here, many types of dynamical behaviour can be expected simply by changing the connectivity patterns in the case of ACells. A more detailed analysis would require a closer investigation of ACell to BCell connectivity and an estimation of the synaptic coupling, which implies defining more specifically the type of ACell (AII, starburst, A17, wide field, medium field, narrow field, etc.) and the type of functional circuit one wants to consider. Note that ACells are difficult to access experimentally due to their location inside the retina. Even more difficult is a measurement of ACell connectivity, especially the degree of symmetry discussed in our paper. Such studies can be performed at the computational level, though, where the mathematical framework proposed here can be applied and extended. Computational results do not tell us what reality is, but they shed light on what it could be.

We would like now to address several possible extensions of this work.

The retino-thalamico-cortical pathway

The retina is only the early stage of the visual system. Visual responses are then processed via the thalamus and the cortex. As explained in the introduction, anticipation is also observed in V1, with a different modality than in the retina. In this paper, our main focus was on the shift of the peak response, whereas studies of anticipation in the cortex mainly focus on the reduction of the response latency, i.e. the delay between the time the bar reaches the receptive field of a cortical column and the effective time its activity starts rising. How do these two effects combine? How does retinal anticipation impact cortical anticipation? To answer these questions at a computational level, one would need a model of the retino-thalamico-cortical pathway which, to the best of our knowledge, has never been proposed. Yet, we have developed a retino-cortical (V1) model, thus short-cutting the thalamus, based on a mean-field model of the V1 cortex developed earlier by the groups of F. Chavane and A. Destexhe [19, 93], able to reproduce V1 anticipation as observed in VSDI imaging. The aim of this work is to understand, computationally, the effect of retinal anticipation on the cortical one and, more generally, the combined effects of motion extrapolation in the retina and V1. This is the object of a forthcoming paper (S. Souihel, M. di Volo, S. Chemla, A. Destexhe, F. Chavane and B. Cessac, in preparation). See [74] for preliminary results.

Retinal circuits

BCells, ACells and GCells are organised into multiple, local, functional circuits with specific connectivity patterns and dynamics in response to stimuli. Each circuit is related to a specific task, such as light intensity or contrast adaptation, motion detection, orientation selectivity, motion direction and so on. Here, we have considered a circuit allowing the retina to detect a moving object on a moving background, where motion sensitive retinal cells remain silent under global motion of the visual scene but fire when the image patch in their receptive field moves differently from the background. From our study we have put forward the hypothesis that this circuit, spread over the retina, could improve motion anticipation thanks to what we have called the “push–pull” effect. Yet other circuits could be studied for their role in motion processing and anticipation. We especially think of the ON-OFF cone and rod-cone pathways responsible for the separation of highlights and shadows, which provide information to the GCells concerning stimuli brighter than the background (ON-centre) or darker than the background (OFF-centre) [54]. This circuit involves both gap junction and ACell (AII) connectivity, and our model could be used to study its dynamics in the presence of a moving object.

Adaptation effects

In a paper from 2005, Hosoya et al. [43] studied dynamic predictive coding in the retina and showed how the spatio-temporal receptive fields of retinal GCells change after a few seconds in a new environment, allowing the retina to adjust its processing dynamically when its visual environment changes. They showed that an amacrine network model with plastic synapses can account for the large variety of observed adaptations. They feature a linear network model of ACells, similar to ours, with, in addition, anti-Hebbian plasticity. Their mathematical analysis, based on linear algebra, makes it possible to characterise the behaviour of the model in terms of eigenvalues and eigenvectors. However, their analysis does not carry over to the gain control introduced by Berry et al., which, as we show, makes the spectral analysis considerably more complex. It would therefore be interesting to explore how plasticity in the ACell synaptic network, combined with local gain control, contributes to anticipation.
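To make the idea concrete, here is a minimal toy sketch of an anti-Hebbian rule acting on a linear lateral-inhibition network (our own illustration, not the actual model of Hosoya et al.): inhibitory weights grow in proportion to output correlations, so that two initially redundant channels become decorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
shared = rng.standard_normal(T)                   # common signal -> redundancy
x = np.stack([shared + 0.3 * rng.standard_normal(T),
              shared + 0.3 * rng.standard_normal(T)])

W = np.zeros((2, 2))                              # lateral inhibitory weights
eta = 2e-4                                        # learning rate
for t in range(T):
    y = x[:, t] - W @ x[:, t]                     # output after lateral inhibition
    dW = eta * np.outer(y, y)                     # anti-Hebbian update: inhibition grows with output correlation
    np.fill_diagonal(dW, 0.0)                     # no self-inhibition
    W += dW

y_all = x - W @ x
c_in = np.corrcoef(x)[0, 1]                       # input redundancy (close to 1)
c_out = np.corrcoef(y_all)[0, 1]                  # strongly reduced after learning
```

The rule has a stable fixed point where output cross-correlations vanish; in this toy setting the output correlation drops from roughly 0.9 to near zero.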


Trajectory decoding
The trajectory of a moving object, which is in general far more complex than a bar moving at constant speed, involves long-range correlations in space and time. Local information about this motion is encoded by retinal GCells. Decoders based on the firing rates of these cells can extract some of the motion features [25, 43, 51, 62, 66, 67, 76]. Yet lateral connectivity plays a central role in motion processing (see, e.g. [41]). One may expect it to induce spatial and temporal correlations in spiking activity, as an echo, a trace, of the object’s trajectory. These correlations cannot be read in the variations of firing rate; neither can they be read in synchronous pairwise correlations, as the propagation of information due to lateral connectivity necessarily involves delays. This raises the question of what information can be extracted from spatio-temporal correlations in a network of connected neurons subjected to a transient stimulus. What is the effect of the stimulus on these correlations? How can one extract this information from data, where one has to measure transient correlations? This question has been addressed in [17]. The potential impact of these spatio-temporal correlations on decoding and anticipating a trajectory will be the object of further studies.
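The role of delays can be illustrated with an entirely hypothetical toy example (not data from the paper): if a cell B echoes a cell A after a fixed propagation delay, the synchronous (zero-lag) correlation stays at chance level, while the time-lagged correlation peaks exactly at the delay.

```python
import numpy as np

rng = np.random.default_rng(1)
T, delay = 5000, 7                                  # time bins; propagation delay in bins
a = (rng.random(T) < 0.1).astype(float)             # binary spike train of cell A
b = np.roll(a, delay)                               # cell B: delayed echo of A
b[:delay] = 0.0

def lagged_corr(u, v, lag):
    """Pearson correlation between u(t) and v(t + lag)."""
    if lag > 0:
        u, v = u[:-lag], v[lag:]
    return np.corrcoef(u, v)[0, 1]

zero_lag = lagged_corr(a, b, 0)                     # near chance level
best_lag = max(range(20), key=lambda k: lagged_corr(a, b, k))
```

Here `best_lag` recovers the propagation delay of 7 bins, information invisible to both firing rates and synchronous correlations.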

Orientation selective cells

Our model affords the possibility to consider BCells with orientation sensitive RFs. The potential role of such BCells for predictive coding has been outlined by Johnston et al. [48]. In their model, individual GCells receive excitatory BCell inputs tuned to different orientations, generating a dynamic predictive code, while feed-forward inhibition generates a high-pass filter that only transmits the initial activation of these inputs, removing redundancy. Could such circuits play a role in motion anticipation? We did not elaborate on this question in the present paper, leaving it to potential future work. Another important question is how to model a retinal network with cells having different orientation selectivities. A V1 cortical model has been proposed by Baspinar et al. [7] for the generation of orientation preference maps, considering both orientation and scale features. Each point (cortical column) is characterised by intrinsic variables, orientation and scale, and the corresponding RF is a rotated Gabor function. The visual stimulus is lifted into a four-dimensional space characterised by position, orientation and scale coordinates. The authors infer from the V1 connectivity a “natural” geometry to which methods from differential geometry can be applied. It could be interesting to investigate this type of mathematical construction in the case of the retina, with families of orientation selective cells, although the retinal connectivity between these cells differs from the V1 orientation preference map structure [12, 13, 82, 91].

Biologically inspired vision systems

When the retina receives a visual stimulus, it determines which components are significant and need to be transmitted further to the brain. This efficient coding heuristic has inspired many recent studies developing biologically inspired systems, both for static image and motion representations [59, 87, 90]. Two major applications of biologically inspired vision systems are retinal prostheses [53, 63, 79] and navigational robotics [24, 52]. Focusing on the second field of application, the ability of a mobile device to navigate in its environment is of utmost interest, especially in order to avoid dangerous situations such as collisions. To be able to move, the robot requires a mapped representation of its environment, but also the ability to interpret and process this representation. Motion processing mechanisms such as anticipation can thus be implemented and used to assess the efficiency of bio-inspired vision in obstacle avoidance.

Psychophysical study of motion anticipation

We briefly come back to the flash lag effect introduced in Sect. 3.4.1. Neuroscience has explored several explanations for this illusion. The first is that the visual system processes moving objects with a smaller latency than flashed objects [89]. The second suggests that the flash lag effect is due to postdiction: the perception of the flash is conditioned by events happening after its appearance [30, 31]. This hypothesis is inspired by the colour phi illusion, where two dots of different colours, appearing at two discrete yet close positions with a small latency, are perceived as a single moving dot whose colour changes. A third explanation involves motion extrapolation: being predictive, the visual system processes a bar in smooth motion, whose trajectory can be extrapolated, differently from a flashed bar, which cannot be predicted. The study conducted in this article falls under this third explanation, suggesting that the retina could assist the cortex in the motion extrapolation task.

Availability of data and materials

The model source code is available upon request.


  1. From (6), \(\frac{d A_{{B}_{i}}}{dt}>0\) if \(A_{i}(t) < h_{B} \tau _{a} V_{i_{\mathrm{drive}}}(t)\). This essentially requires \(\tau _{a}\) to be slow enough.


  1. Baccus S, Meister M. Fast and slow contrast adaptation in retinal circuitry. Neuron. 2002;36(5):909–19.

  2. Baden T, Berens P, Bethge M, Euler T. Spikes in mammalian bipolar cells support temporal layering of the inner retina. Curr Biol. 2013;23(1):48–52.

  3. Baden T, Berens P, Franke K, Rosón MR, Bethge M, Euler T. The functional diversity of retinal ganglion cells in the mouse. Nature. 2016;529:345–50.

  4. Baldo MVC, Klein SA. Extrapolation or attention shift? Nature. 1995;378:565–6.

  5. Barlow H. Possible principles underlying the transformation of sensory messages. In: Sensory communication. 1961. p. 217–34.

  6. Barlow H, Hill R, Levick W. Retinal ganglion cells responding selectively to direction and speed of image motion in the rabbit. J Physiol. 1964;173(3):377.

  7. Baspinar E, Citti G, Sarti A. A geometric model of multi-scale orientation preference maps via Gabor functions. J Math Imaging Vis. 2018;60:900–12.

  8. Benvenuti G, Chemla S, Boonman A, Perrinet L, Masson GS, Chavane F. Anticipatory responses along motion trajectories in awake monkey area V1. bioRxiv. 2020.

  9. Berry M, Brivanlou I, Jordan T, Meister M. Anticipation of moving stimuli by the retina. Nature. 1999;398(6725):334–8.

  10. Borg-Graham L. The computation of directional selectivity in the retina occurs presynaptic to the ganglion cell. Nat Neurosci. 2001;4:176–83.

  11. Borst A, Euler T. Seeing things in motion: models, circuits, and mechanisms. Neuron. 2011;71(6):974–94.

  12. Bosking W, Crowley J, Fitzpatrick D, et al. Spatial coding of position and orientation in primary visual cortex. Nat Neurosci. 2002;5(9):874–82.

  13. Bosking W, Zhang Y, Schofield B, Fitzpatrick D. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. J Neurosci. 1997;17(6):2112–27.

  14. Cessac B. A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics. J Math Biol. 2008;56(3):311–45.

  15. Cessac B. A discrete time neural network model with spiking neurons: II: dynamics with noise. J Math Biol. 2011;62(6):863–900.

  16. Cessac B. Linear response in neuronal networks: from neurons dynamics to collective response. Chaos, Interdiscip J Nonlinear Sci. 2019;29(10):103105.

  17. Cessac B, Ampuero I, Cofre R. Linear response for spiking neuronal networks with unbounded memory. J Math Neuro. Submitted 2020.

  18. Cessac B, Viéville T. On dynamics of integrate-and-fire neural networks with adaptive conductances. Front Neurosci. 2008;2:2.

  19. Chemla S, Reynaud A, di Volo M, Zerlaut Y, Perrinet L, Destexhe A, Chavane F. Suppressive traveling waves shape representations of illusory motion in primary visual cortex of awake primate. J Neurosci. 2019;39(22):4282–98.

  20. Chen EY, Marre O, Fisher C, Schwartz G, Levy J, da Silviera RA, Berry MJI. Alert response to motion onset in the retina. J Neurosci. 2013;33(1):120–32.

  21. Choi H, Zhang L, Cembrowski MS, Sabottke CF, Markowitz AL, Butts DA, Kath WL, Singer JH, Riecke H. Intrinsic bursting of AII amacrine cells underlies oscillations in the RD1 mouse retina. J Neurophysiol. 2014;112(6):1491–504.

  22. Colonnier M, O’Kusky J. Number of neurons and synapses in the visual cortex of different species. Rev Can Biol. 1981;40(1):91–9.

  23. Coombes S, Lai YM, Şayli M, Thul R. Networks of piecewise linear neural mass models. Eur J Appl Math. 2018;29(5):869–90.

  24. Delbruck T, Martin K, Liu S-C, Linares-Barranco B, Moeys DP. Analog and digital implementations of retinal processing for robot navigation systems [PhD thesis]. Zurich: ETH; 2016.

  25. Deny S, Ferrari U, Macé E, Yger P, Caplette R, Picaud S, Tkačik G, Marre O. Multiplexed computations in retinal ganglion cells of a single type. Nat Commun. 2017;8:1964.

  26. Deriche R. Using Canny’s criteria to derive a recursively implemented optimal edge detector. Int J Comput Vis. 1987;1(2):167–87.

  27. Destexhe A, Mainen Z, Sejnowski T. An efficient method for computing synaptic conductances based on a kinetic model of receptor binding. Neural Comput. 1994;6(1):14–8.

  28. Destexhe A, Mainen Z, Sejnowski T. Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. J Comput Neurosci. 1994;1(3):195–230.

  29. Dowling J. Retina: an overview. In: Reference module in biomedical sciences. Amsterdam: Elsevier; 2015.

  30. Eagleman DM. Visual illusions and neurobiology. Nat Rev Neurosci. 2001;2:920–6.

  31. Eagleman DM, Sejnowski TJ. Motion signals bias localization judgments: a unified explanation for the flash-lag, flash-drag, flash-jump, and Frohlich illusions. J Vis. 2007;7(4):3.

  32. Edelman A. The probability that a random real Gaussian matrix has k real eigenvalues, related distributions, and the circular law. J Multivar Anal. 1997;60(2):203–32.

  33. Enciso GA, Rempe M, Dmitriev AV, Gavrikov KE, Terman D, Mangel SC. A model of direction selectivity in the starburst amacrine cell network. J Comput Neurosci. 2010;18(3):567–78.

  34. Euler T, Detwiler P, Denk W. Directionally selective calcium signals in dendrites of starburst amacrine cells. Nature. 2002;418:845–52.

  35. Faugeras O, Touboul J, Cessac B. A constructive mean field analysis of multi population neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci. 2009;3:1.

  36. Freeman W, Adelson E. The design and use of steerable filters. IEEE Trans Pattern Anal Mach Intell. 1991;13(9):891–906.

  37. Fried S, Muench T, Werblin F. Mechanisms and circuitry underlying directional selectivity in the retina. Nature. 2002;420(6914):411–4.

  38. Gantmacher FR. The theory of matrices. New York: Chelsea; 1998.

  39. Geusebroek J-M, Smeulders A, Weijer J. Fast anisotropic Gauss filtering. IEEE Trans Image Process. 2003;12(8):938–43.

  40. Girko V. Circular law. Theory Probab Appl. 1984;29:694–706.

  41. Gollisch T, Meister M. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron. 2010;65(2):150–64.

  42. Hassenstein B, Reichardt W. Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Z Naturforsch. 1956;11b:513–24.

  43. Hosoya T, Baccus SA, Meister M. Dynamic predictive coding by the retina. Nature. 2005;436:71–7.

  44. Hubel DH, Wiesel TN. Receptive fields of optic nerve fibres in the spider monkey. J Physiol. 1960;154:572–80.

  45. Jacoby J, Zhu Y, DeVries SH, Schwartz GW. An amacrine cell circuit for signaling steady illumination in the retina. Cell Rep. 2015;13(12):2663–70.

  46. Jancke D, Erlaghen W, Schöner G, Dinse H. Shorter latencies for motion trajectories than for flashes in population responses of primary visual cortex. J Physiol. 2004;556:971–82.

  47. Johnston J, Lagnado L. General features of the retinal connectome determine the computation of motion anticipation. eLife. 2015;4:e06250.

  48. Johnston J, Seibel S-H, Darnet LSA, Renninger S, Orger M, Lagnado L. A retinal circuit generating a dynamic predictive code for oriented features. Neuron. 2019;102(6):1211–22.

  49. Kähne M, Rüdiger S, Kihara AH, Lindner B. Gap junctions set the speed and nucleation rate of stage I retinal waves. PLoS Comput Biol. 2019;15(4):1–15.

  50. Karvouniari D, Gil L, Marre O, Picaud S, Cessac B. A biophysical model explains the oscillatory behaviour of immature starburst amacrine cells. Sci Rep. 2019;9:1859.

  51. Kastner DB, Baccus SA. Spatial segregation of adaptation and predictive sensitization in retinal ganglion cells. Neuron. 2013;79(3):541–54.

  52. Lehnert H, Escobar M, Araya M. Retina-inspired visual module for robot navigation in complex environments. In: 2019 international joint conference on neural networks (IJCNN). 2019. p. 1–8.

  53. Morillas C, Romero S, Martínez A, Pelayo F, Reyneri L, Bongard M, Fernández E. A neuroengineering suite of computational tools for visual prostheses. Neurocomputing. 2007;70(16):2817–27.

  54. Nelson R, Kolb H. On and off pathways in the vertebrate retina and visual system. Vis Neurosci. 2004;1:260–78.

  55. Nguyen HH, O’Rourke S. The elliptic law. Int Math Res Not. 2014;2015(17):7620–89.

  56. Nijhawan R. Motion extrapolation in catching. Nature. 1994;370:256–7.

  57. Nijhawan R. Visual decomposition of colour through motion extrapolation. Nature. 1997;386:66–9.

  58. Niu W-Q, Yuan J-Q. Recurrent network simulations of two types of non-concentric retinal ganglion cells. Neurocomputing. 2007;70(13):2576–80.

  59. Oliveira RF, Roque AC. A biologically plausible neural network model of the primate primary visual system. Neurocomputing. 2002;44–46:957–63.

  60. Olshausen B, Field D. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vis Res. 1998;37:3311–25.

  61. Ölveczky B, Baccus S, Meister M. Segregation of object and background motion in the retina. Nature. 2003;423:401–8.

  62. Palmer SE, Marre O, Berry MJ, Bialek W. Predictive information in a sensory population. Proc Natl Acad Sci USA. 2015;112(22):6908–13.

  63. Parikh N, Itti L, Weiland J. Saliency-based image processing for retinal prostheses. J Neural Eng. 2010;7(1):016006.

  64. Remington LA. Chapter 4—retina. In: Remington LA, editor. Clinical anatomy and physiology of the visual system. 3rd ed. Saint Louis: Butterworth-Heinemann; 2012. p. 61–92.

  65. Ruelle D, Takens F. On the nature of turbulence. Commun Math Phys. 1971;20:167–92.

  66. Salisbury J, Palmer S. Optimal prediction in the retina and natural motion statistics. J Stat Phys. 2016;162:1309–23.

  67. Sederberg AJ, MacLean JN, Palmer SE. Learning to make external sensory stimulus predictions using internal correlations in populations of neurons. Proc Natl Acad Sci USA. 2018;115(5):1105–10.

  68. Segev R, Puchalla J, Berry MJ II. Functional organization of ganglion cells in the salamander retina. J Neurophysiol. 2006;95:2277–92.

  69. Seneta E. Non-negative matrices and Markov chains. Berlin: Springer; 2006.

  70. Sernagor E, Hennig M. Chapter 49—retinal waves: underlying cellular mechanisms and theoretical considerations. In: Rubenstein JL, Rakic P, editors. Cellular migration and formation of neuronal connections. Oxford: Academic Press; 2013. p. 909–20.

  71. Sethuramanujam S, Awatramani GB, Slaughter MM. Cholinergic excitation complements glutamate in coding visual information in retinal ganglion cells. J Physiol. 2018;596(16):3709–24.

  72. Sethuramanujam S, McLaughlin AJ, de Rosenroll G, Hoggarth A, Schwab DJ, Awatramani GB. A central role for mixed acetylcholine/GABA transmission in direction coding in the retina. Neuron. 2016;90(6):1243–56.

  73. Snellman J, Kaur T, Shen Y, Nawy S. Regulation of on bipolar cell activity. Prog Retin Eye Res. 2008;27(4):450–63.

  74. Souihel S. Generic and specific computational principles for visual anticipation of motion trajectories [PhD thesis]. Université Nice Côte d’Azur; EDSTIC; 2019.

  75. Srinivas M, Rozental R, Kojima T, Dermietzel R, Mehler M, Condorelli DF, Kessler JA, Spray DC. Functional properties of channels formed by the neuronal gap junction protein connexin36. J Neurosci. 1999;19(22):9848–55.

  76. Srinivasan M, Laughlin S, Dubs A. Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B, Biol Sci. 1982;216(1205):427–59.

  77. Subramaniyan M, Ecker AS, Patel SS, Cotton RJ, Bethge M, Pitkow X, Berens P, Tolias AS. Faster processing of moving compared with flashed bars in awake macaque V1 provides a neural correlate of the flash lag illusion. J Neurophysiol. 2018;120(5):2430–52.

  78. Tauchi M, Masland R. The shape and arrangement of the cholinergic neurons in the rabbit retina. Proc R Soc Lond B, Biol Sci. 1984;223:101–19.

  79. Tran TK. Large scale retinal modeling for the design of new generation retinal prostheses. 2015.

  80. Trenholm S, Schwab D, Balasubramanian V, Awatramani G. Lag normalization in an electrically coupled neural network. Nat Neurosci. 2013;16:154–6.

  81. Tukker JJ, Taylor WR, Smith RG. Direction selectivity in a model of the starburst amacrine cell. Vis Neurosci. 2004;21(4):611–25.

  82. Tversky T, Miikkulainen R. Modeling directional selectivity using self-organizing delay-adaptation maps. Neurocomputing. 2002;44–46:679–84.

  83. Unser M. Fast Gabor-like windowed Fourier and continuous wavelet transforms. IEEE Signal Process Lett. 1994;1(5):76–9.

  84. Valois RLD, Valois KKD. Vernier acuity with stationary moving Gabors. Vis Res. 1991;31(9):1619–26.

  85. Vaney DI, Sivyer B, Taylor WR. Direction selectivity in the retina: symmetry and asymmetry in structure and function. Nat Rev Neurosci. 2012;13:194–208.

  86. Völgyi B, Pan F, Paul DL, Wang JT, Huberman AD, Bloomfield SA. Gap junctions are essential for generating the correlated spike activity of neighboring retinal ganglion cells. PLoS ONE. 2013;8(7):e69426.

  87. Wei H, Zuo Q. A biologically inspired neurocomputing circuit for image representation. Neurocomputing. 2015;164:96–111.

  88. Wei W, Hamby A, Zhou K, Feller M. Development of asymmetric inhibition underlying direction selectivity in the retina. Nature. 2010;469(7330):402–6.

  89. Whitney D, Murakami I. Latency difference, not spatial extrapolation. Nat Neurosci. 1998;1:656–7.

  90. Xu J, Park SH, Zhang X. A bio-inspired motion sensitive model and its application to estimating human gaze positions under classified driving conditions. Neurocomputing. 2019;345:23–35.

  91. Xu X, Bosking W, Sáry G, Stefansic J, Shima D, Casagrande V. Functional organization of visual cortex in the owl monkey. J Neurosci. 2004;24(28):6237.

  92. Yu Y, Sing Lee T. Adaptive contrast gain control and information maximization. Neurocomputing. 2005;65–66:111–6.

  93. Zerlaut Y, Chemla S, Chavane F, Destexhe A. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. J Comput Neurosci. 2018;44(1):45–61.

  94. Zheng J, Lee S, Zhou ZJ. A transient network of intrinsically bursting starburst cells underlies the generation of retinal waves. Nat Neurosci. 2006;9(3):363–71.



Acknowledgements

We warmly acknowledge Olivier Marre and Frédéric Chavane for their insightful comments, as well as Michael Berry, Matthias Hennig, Benoit Miramond, Stephanie Palmer and Laurent Perrinet for their thorough feedback as jury members of Selma Souihel’s PhD.


Funding

This work was supported by the French National Research Agency (ANR) through the project “Trajectory”, funding Selma Souihel’s PhD, and by the interdisciplinary Institute for Modelling in Neuroscience and Cognition (NeuroMod) of the Université Côte d’Azur.

Author information




Both authors contributed to the final version of the manuscript. BC supervised the project. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Selma Souihel.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

The authors transfer to Springer the non-exclusive publication rights, and they warrant that their contribution is original and that they have full power to make this grant.


Appendix A: Parameters of the model

Table 1 Model parameter values used in the simulations, unless stated otherwise

Appendix B: Spatio-temporal filtering

B.1 Receptive fields

The spatial kernel of the BCell i is modelled with a difference of Gaussians (DOG):

$$ {\mathcal {K}}_{{B}_{i},S}(x,y) = \frac{A_{1}}{2 \pi \sqrt{\det C_{1}}} e^{- \frac{1}{2} \tilde{X_{i}}.C_{1}^{-1}.X_{i}} - \frac{A_{2}}{2 \pi \sqrt{\det C_{2}}} e^{-\frac{1}{2} \tilde{X_{i}}.C_{2}^{-1}.X_{i}}, $$

where \(X_{i} = \bigl({\scriptsize\begin{matrix} x - x_{i} \cr y - y_{i} \end{matrix}}\bigr) \), \(\widetilde{\hphantom{0}} \) denotes the transpose, \(x_{i}\) and \(y_{i}\) are the coordinates of the receptive field centre, which coincide with the coordinates of the cell, and \(C_{1}\), \(C_{2}\) are positive definite matrices whose main principal axis represents the preferred orientation. For circular DOGs (no preferred orientation), \(C_{1} \equiv \sigma _{1}^{2} {\mathcal {I}}\), \(C_{2} \equiv \sigma _{2}^{2} {\mathcal {I}}\), where \({\mathcal {I}}\) is the identity matrix in two dimensions. The two Gaussians of the DOG are thus concentric and have the same principal axes. \(X_{i}\) has the physical dimension of a length (mm), thus the entries of \(C_{a}\), \(a=1,2\), are expressed in \(\mathrm{mm}^{2}\). The amplitudes \(A_{a}\), \(a=1,2\), have the dimension of mV so that convolution (1) has the dimension of a voltage.
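As an illustration, here is a minimal numerical sketch of this spatial DOG (the function name and the parameter values are ours, chosen for illustration): with \(\sigma _{1} < \sigma _{2}\) and suitable amplitudes, it produces the classical centre-surround profile.

```python
import numpy as np

def dog_kernel(X, Y, A1, C1, A2, C2, xi=0.0, yi=0.0):
    """Difference-of-Gaussians spatial receptive field.

    C1, C2: 2x2 positive definite covariance matrices (mm^2);
    A1, A2: amplitudes (mV); (xi, yi): receptive field centre."""
    P = np.stack([X - xi, Y - yi], axis=-1)   # displacement from the centre
    def gauss(A, C):
        q = np.einsum('...k,kl,...l->...', P, np.linalg.inv(C), P)
        return A / (2 * np.pi * np.sqrt(np.linalg.det(C))) * np.exp(-0.5 * q)
    return gauss(A1, C1) - gauss(A2, C2)

# Circular case: C = sigma^2 * I, narrow excitatory centre, wide surround
I2 = np.eye(2)
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
K = dog_kernel(X, Y, A1=1.0, C1=0.1**2 * I2, A2=0.8, C2=0.3**2 * I2)
```

With these values the kernel is positive at the centre and negative in the surround; an anisotropic RF is obtained simply by passing non-diagonal covariance matrices.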

We model the temporal part of the RF with a difference of non-concentric Gaussians whose integral over the time domain is zero. This kernel fits well the shape of the temporal projection of the bipolar RF observed in experiments [74]:

$$ {\mathcal {K}}_{T}(t) = \biggl( \frac{K_{1}}{\sqrt{2\pi } \sigma _{1} } e^{ - \frac{ ( t- \mu _{1} )^{2} }{2\sigma _{1} ^{2} }} - \frac{K_{2}}{\sqrt{2\pi } \sigma _{2} } e^{ - \frac{ ( t- \mu _{2} )^{2} }{2\sigma _{2} ^{2} }} \biggr) H(t), $$

where \(H(t)\) is the Heaviside function. The parameters \(\mu _{b}\), \(\sigma _{b}\), \(b=1,2\), have the dimension of a time (s), whereas \(K_{b}\) are dimensionless. The following condition must hold to ensure the continuity of \({\mathcal {K}}_{T}(t)\) at zero:

$$ \frac{K_{1}}{\sigma _{1}}e^{-\frac{\mu _{1}^{2}}{2\sigma _{1}^{2}}} = \frac{K_{2}}{\sigma _{2}}e^{-\frac{\mu _{2}^{2}}{2\sigma _{2}^{2}}}. $$

Thus, \({\mathcal {K}}_{{B}_{i}}(x,y,0)=0\). In addition, we require that the time integral of \({\mathcal {K}}_{T}\) vanishes, so that the response to a constant stimulus tends to zero and the cell only reacts to changes. This reads as follows:

$$ K_{1} \Pi \biggl( \frac{\mu _{1}}{\sigma _{1}} \biggr)=K_{2} \Pi \biggl( \frac{\mu _{2}}{\sigma _{2}} \biggr), $$

where

$$ \Pi (x)=\frac{1}{\sqrt{2 \pi }} \int _{- \infty }^{x} e^{- \frac{y^{2}}{2}} \,dy $$

is the cumulative distribution function of the standard Gaussian distribution.
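As a numerical check (the parameter values below are illustrative, not those of the paper, and the continuity condition is not imposed here), one can fix \(K_{1}\) and deduce \(K_{2}\) from this relation; the time integral of \({\mathcal {K}}_{T}\) then vanishes.

```python
import math

def Phi(x):
    # Cumulative distribution function of the standard Gaussian
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Illustrative parameters (seconds); K2 is deduced from K1 via the zero-integral condition
mu1, sigma1, mu2, sigma2, K1 = 0.05, 0.02, 0.12, 0.05, 1.0
K2 = K1 * Phi(mu1 / sigma1) / Phi(mu2 / sigma2)

def KT(t):
    # Difference of Gaussians multiplied by the Heaviside function H(t)
    if t < 0.0:
        return 0.0
    g1 = K1 / (math.sqrt(2 * math.pi) * sigma1) * math.exp(-(t - mu1) ** 2 / (2 * sigma1 ** 2))
    g2 = K2 / (math.sqrt(2 * math.pi) * sigma2) * math.exp(-(t - mu2) ** 2 / (2 * sigma2 ** 2))
    return g1 - g2

# Trapezoidal quadrature over [0, 1] s (the tail beyond 1 s is negligible here)
dt = 1e-5
ts = [k * dt for k in range(int(1.0 / dt) + 1)]
integral = dt * (sum(KT(t) for t in ts) - 0.5 * (KT(ts[0]) + KT(ts[-1])))
```

The check works because \(\int _{0}^{\infty } \frac{1}{\sqrt{2\pi }\sigma } e^{-(t-\mu )^{2}/2\sigma ^{2}}\,dt = \Pi (\mu /\sigma )\), so the condition above is exactly the vanishing of the kernel integral.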

B.2 Numerical convolution

Here we describe the method used to numerically integrate convolution (1) of the receptive field \({\mathcal {K}}_{{B}_{i},S}\) (56) in the spatial domain with a stimulus \({\mathcal {S}}\). For the sake of clarity, we restrict the computation to one Gaussian of the DOG; the extension to a difference of Gaussians is straightforward. In the following, we consider a spatially discretized stimulus. When dealing with a 2D stimulus, we have to integrate over two axes. When the eigenvectors of the 2D Gaussians are the axes of integration, the spatial filter is separable in the stimulus coordinate system. Considering the stimulus as a grid of pixels, we can integrate using the following discretization: let \(L_{x}\) be the size of the stimulus along the x axis, \(L_{y}\) its size along the y axis, and δ the pixel length. We set \(S_{ij}(t) \equiv S(i\delta , j \delta , t)\), with \(i=0, \dots , \frac{L_{x}}{\delta }\) and \(j=0, \dots , \frac{L_{y}}{\delta }\). The spatial convolution then becomes

$$\begin{aligned} &[ {\mathcal {K}}_{{B}_{i}} \stackrel {x,y}{\ast } {\mathcal {S}}](t) \\ &\quad = \frac{1}{2\pi \sigma _{x} \sigma _{y}} \int \int _{\mathbb{R}^{2}}S(x,y,t) e^{- \frac{(x- x_{0})^{2}}{2\sigma _{x}^{2}} - \frac{(y- y_{0})^{2}}{2\sigma _{y}^{2}}}\,dx \,dy \\ &\quad = \frac{1}{4} \sum_{i,j}S_{ij}(t) \biggl[ \operatorname{erf}\biggl( \frac{(i+1)\delta -x_{0}}{\sqrt{2}\sigma _{x}}\biggr)-\operatorname{erf}\biggl( \frac{i\delta -x_{0}}{\sqrt{2}\sigma _{x}}\biggr) \biggr] \biggl[ \operatorname{erf}\biggl( \frac{(j+1)\delta -y_{0}}{\sqrt{2}\sigma _{y}}\biggr)-\operatorname{erf}\biggl( \frac{j\delta -y_{0}}{\sqrt{2}\sigma _{y}} \biggr) \biggr]. \end{aligned}$$
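A sketch of this quadrature in Python, for a single normalised Gaussian of the DOG (function and variable names are ours): each pixel weight is the exact Gaussian integral over that pixel, so a constant stimulus covering the whole receptive field returns the total Gaussian mass.

```python
import math
import numpy as np

erf = np.vectorize(math.erf)        # elementwise error function

def gaussian_response(S, delta, x0, y0, sigma_x, sigma_y):
    """Pixel-exact convolution of a discretized stimulus S[j, i] with a
    normalised 2D Gaussian centred at (x0, y0); pixel (i, j) covers
    [i*delta, (i+1)*delta] x [j*delta, (j+1)*delta]."""
    Ly, Lx = S.shape
    xe = np.arange(Lx) * delta      # left pixel edges along x
    ye = np.arange(Ly) * delta      # lower pixel edges along y
    wx = 0.5 * (erf((xe + delta - x0) / (math.sqrt(2) * sigma_x))
                - erf((xe - x0) / (math.sqrt(2) * sigma_x)))
    wy = 0.5 * (erf((ye + delta - y0) / (math.sqrt(2) * sigma_y))
                - erf((ye - y0) / (math.sqrt(2) * sigma_y)))
    return wy @ S @ wx              # sum_{i,j} S[j, i] * wy[j] * wx[i]

# Sanity check: a constant stimulus covering the whole Gaussian yields its mass (= 1)
S = np.ones((200, 200))
r = gaussian_response(S, delta=0.1, x0=10.0, y0=10.0, sigma_x=1.0, sigma_y=1.0)
```

The separability of the filter appears directly in the code: the double sum factorises into two one-dimensional weight vectors, `wx` and `wy`.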

In the case where the eigenvectors of the 2D Gaussians are not the axes of integration, the spatial filter is not separable in the stimulus coordinate system. There exist methods that perform the computation by making a linear combination of basis filters [36], others that use Fourier-based deconvolution techniques [83], and others using recursive filtering techniques [26]. However, these methods have a high computational complexity. We choose instead to use a computer vision method from Geusebroek et al. [39].

The method is based on a projection onto a non-orthogonal basis, where the first axis is x and the second is parametrized by an angle ϕ (see Fig. 17). The new standard deviations read as follows:

$$\begin{aligned}& \sigma _{x'}= \frac{\sigma _{x}\sigma _{y}}{\sqrt{\sigma _{x}^{2}\cos ^{2}{\theta }+\sigma _{y}^{2}\sin ^{2}{\theta }}}, \\ & \sigma _{\phi }= \frac{\sqrt{\sigma _{y}^{2}\cos ^{2}{\theta }+\sigma _{x}^{2}\sin ^{2}{\theta }}}{\sin {\phi }} \end{aligned}$$


$$ \tan (\phi )= \frac{\sigma _{y}^{2}\cos ^{2}{\theta }+\sigma _{x}^{2}\sin ^{2}{\theta }}{(\sigma _{x}^{2}-\sigma _{y}^{2})\cos {\theta }\sin {\theta }} $$

and \(\sigma _{x} \neq \sigma _{y}\) (in the orientation-sensitive case).

Figure 17

Filter transformation description [39]. The original system of axes is represented by x and y, and the ellipse system of axes by u and v. ϕ represents the angle of the second axis of the non-orthogonal basis. The integration domain of a pixel is limited by four lines of equations: \(x = i\delta \), \(x = (i+1)\delta \), \(y = j\delta \) and \(y = (j+1)\delta \). Rewriting these four equations in the new system of axes through a coordinate change enables us to write Eq. (61)

We adapt the implementation to the spatially discretized stimulus, using an integration scheme similar to the one introduced in the separable case. The spatial convolution now reads as follows:

$$ \begin{aligned} & [ {\mathcal {K}}_{{B}_{i}} \stackrel {x,y}{\ast } {\mathcal {S}}](x_{0},y_{0},t) \\ &\quad = \sigma _{x'}\sqrt{\frac{\pi }{2}}\sum _{(i,j)\in [0,s_{x}]\times [0,s_{y}]} \int _{j\frac{\delta }{\sin (\phi )}}^{(j+1) \frac{\delta }{\sin (\phi )}}C_{ij}e^{- \frac{(y'- y_{0})^{2}}{2\sigma _{\phi }^{2}}} \\ &\qquad {}\times \biggl[\operatorname{erf}\biggl( \frac{(-\cos (\phi )y'+i+1)\delta -x_{0}}{\sqrt{2}\sigma _{x'}}\biggr)-\operatorname{erf} \biggl( \frac{(-\cos (\phi )y'+i)\delta -x_{0}}{\sqrt{2}\sigma _{x'}}\biggr)\biggr] \,dy', \end{aligned} $$

where \(s_{x}\) (resp. \(s_{y}\)) denotes the size of the stimulus in pixels along the x axis (resp. y axis), and \(C_{ij}\) is the contrast at the pixel located at \((i,j)\).

The integral is then computed numerically. The advantage of this formulation is to replace a two-dimensional integration by a one-dimensional one.

Appendix C: Random connectivity

Here, we define the random connectivity matrix from ACells to BCells considered in Sect. 2.3.3. Each cell (ACell or BCell) has a random number of branches (dendritic tree), each of which has a random length and a random angle with respect to the horizontal axis. The length of branches L follows an exponential distribution

$$ f_{L}(l)=\frac{1}{\xi }e^{-\frac{l}{\xi }},\quad l \geq 0, $$

with spatial scale ξ. The number of branches n is also a random variable, Gaussian with mean and variance \(\sigma _{n}\). The angle distribution is taken to be isotropic in the plane, i.e. uniform on \([0,2 \pi [\). When a branch of an ACell A intersects a branch of a BCell B, there is a chemical synapse from A to B.
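The generative model just described can be sketched as follows (an illustrative implementation with hypothetical parameter values, not the paper's code; each branch is treated as a straight segment from the soma to its tip, and `segments_cross` is a standard orientation test):

```python
import numpy as np

rng = np.random.default_rng(0)

def branches(center, xi, n_mean, n_sigma):
    """Random dendritic branches: segments leaving `center` with exponential
    lengths (scale xi), isotropic angles and a Gaussian number of branches."""
    n = max(1, int(round(rng.normal(n_mean, n_sigma))))
    lengths = rng.exponential(xi, n)
    angles = rng.uniform(0.0, 2.0 * np.pi, n)
    tips = center + np.stack([lengths * np.cos(angles),
                              lengths * np.sin(angles)], axis=1)
    return [(center, tip) for tip in tips]

def segments_cross(p1, p2, q1, q2):
    """Proper segment intersection via signed-area orientation tests."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1*d2 < 0 and d3*d4 < 0

def connectivity(acell_pos, bcell_pos, xi=2.0, n_mean=5.0, n_sigma=1.0):
    """0/1 matrix with entry (i, j) = 1 iff some branch of ACell j crosses
    some branch of BCell i (a chemical synapse from A_j to B_i)."""
    a_br = [branches(np.asarray(p, float), xi, n_mean, n_sigma) for p in acell_pos]
    b_br = [branches(np.asarray(p, float), xi, n_mean, n_sigma) for p in bcell_pos]
    W = np.zeros((len(bcell_pos), len(acell_pos)), dtype=int)
    for i, bb in enumerate(b_br):
        for j, ab in enumerate(a_br):
            if any(segments_cross(*sa, *sb) for sa in ab for sb in bb):
                W[i, j] = 1
    return W

pos = [(k, 0.0) for k in range(10)]   # cells on a line, unit spacing
W = connectivity(pos, pos)
```

Because branch lengths are exponential with scale ξ, connections between distant cells become exponentially rare, as quantified below.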

Here, we assume that both cell types have the same probability distributions for branches, thus neglecting the actual shape of ACell and BCell dendritic trees. On biological grounds, this assumption is relevant if we consider the shape of the BCell dendritic tree in the inner plexiform layer (IPL) (see e.g. Figs. 5, 16, 17). While outside the IPL BCells have the form of a dipole, in the IPL their dendrites have a form well approximated by our two-dimensional model. A potential refinement would consist in considering different sets of parameters in the probability laws defining, respectively, the BCell and ACell dendritic trees.

We show in Fig. 18 an example of connectivity matrix produced this way, as well as the probability that two branches intersect as a function of the distance of the two cells.

Figure 18

Random connectivity. Left. Example of a random connectivity matrix from ACells \({A}_{j}\) to BCells \({B}_{i}\). White points correspond to a connection from \({A}_{j}\) to \({B}_{i}\). Right. Probability \(P(d)\) that two branches intersect as a function of the distance between the two cells. ‘Exp’ corresponds to the numerical estimation and ‘Th’ to the theoretical prediction. Here, \(\xi =2\)

We now compute this probability. We use the standard notation of probability theory, where a random variable is written with a capital letter and its realisation with a lowercase letter. Thus, \(F_{X}(x) =\mathbb{P} [ X < x ]\) is the cumulative distribution function of the random variable X and \(f_{X}(x)=\frac{d F_{X}}{dx}\) is its density.

We consider the oriented connection from the cell A (ACell) of coordinates \((x_{A},y_{A})\) to a cell B (BCell) of coordinates \((x_{B},y_{B})\), so that the distance between the two cells is \(d_{AB}=\sqrt{ ( x_{B}-x_{A} )^{2} + ( y_{B}-y_{A} )^{2}}\).

The vector \(\vec{AB}\) makes an oriented angle \(\eta =\widehat{ ( \vec{AB},\vec{Ax} )}\) with the positive horizontal axis, where

$$ \eta = \textstyle\begin{cases} \arctan \frac{y_{B}-y_{A}}{x_{B}-x_{A}}, &\mbox{if } x_{B} > x_{A}; \\ \pi +\arctan \frac{y_{B}-y_{A}}{x_{B}-x_{A}}, &\mbox{if } x_{B} < x_{A}. \end{cases} $$

Here, we neglect the effects of boundaries, taking e.g. an infinite lattice or periodic boundary conditions, so that the probability to connect A to B is invariant by rotation. Thus, we compute this probability in the first quadrant \(x_{B} > x_{A}\), \(y_{B} > y_{A}\). In this case \(\eta =\arctan \frac{y_{B}-y_{A}}{x_{B}-x_{A}}\).

Each cell has a random number of branches (dendritic tree), each of which has a random length and a random angle with respect to the horizontal axis (Fig. 19). The length of branches L follows the exponential distribution (62): \(f_{L}(l)=\frac{1}{\xi }e^{-\frac{l}{\xi }}\), \(l \geq 0 \), with cumulative distribution function

$$ F_{L}(l)=1-e^{-\frac{l}{\xi }}. $$

The spatial scale ξ favours short-range connections. The number of branches N follows a normal distribution with mean and variance \(\sigma _{n}\). The angle distribution is taken to be isotropic in the plane, i.e. uniform on \([0,2 \pi [\).

Figure 19

Geometry of connection between two neurons. α (β) is the angle of the neuron A’s branch with length \(L_{A}\) (neuron B’s branch with length \(L_{B}\)) with respect to the horizontal axis. θ is the angle between the segment connecting AB and the branch A. C represents the virtual point that lies at the intersection of the branches of length \(L_{A}\) and \(L_{B}\). \(d_{AB}\) (resp. \(d_{AC}\), \(d_{BC}\)) denotes the distance between A and B (resp. A, C and B, C). Note that \(d_{AC} \leq L_{A}\), \(d_{BC} \leq L_{B}\)

We compute the probability that a branch of ACell A of length \(L_{A}\) intersects at point C a branch of BCell B of length \(L_{B}\). We denote by α the oriented angle \(\widehat{ ( \vec{Ax},\vec{AC} )}\); by β the oriented angle \(\widehat{ ( \vec{Bx},\vec{BC} )}\); by θ the oriented angle \(\widehat{ ( \vec{AB},\vec{AC} )}\). In the first quadrant, \(\alpha =\theta +\eta \). Note that the condition to be in the first quadrant constrains η but not α.

From the sine rule we have \(\frac{\sin ( \beta -\alpha +\theta )}{d_{AC}}= \frac{\sin \theta }{d_{BC}}= \frac{\sin ( \beta -\alpha )}{d_{AB}}\). This holds, however, only if A, B, C form a triangle, that is, if the two branches are long enough to intersect at C, which reads: \(0 \leq d_{AC} = \frac{\sin ( \beta -\alpha +\theta )}{\sin ( \beta -\alpha )} d_{AB} \leq L_{A}\) and \(0 \leq d_{BC} = \frac{\sin \theta }{\sin ( \beta -\alpha )} d_{AB} \leq L_{B}\). These are necessary and sufficient conditions for the branches to intersect.

Note that the positivity of these quantities imposes conditions linking the angles α, β, η with \(\theta =\alpha -\eta \).

  1. 1.

    If \(\sin ( \beta -\alpha ) > 0\) \(\Leftrightarrow 0 < \beta - \alpha < \pi \) \(\Leftrightarrow \alpha < \beta < \pi +\alpha \), we must have \(\sin \theta > 0 \Leftrightarrow 0 < \theta =\alpha - \eta < \pi \) so that \(\eta < \alpha < \pi +\eta \) and \(\sin ( \beta -\alpha +\theta )>0 \Leftrightarrow 0 < \beta -\alpha +\theta =\beta - \eta < \pi \) so that \(\eta < \beta < \pi +\eta \) (because \(\eta \geq 0\)). All these constraints are satisfied if \(\eta < \alpha < \beta < \min ( \pi +\alpha , \pi +\eta )=\pi + \eta \).

  2. 2.

    If \(\sin ( \beta -\alpha ) < 0\) \(\Leftrightarrow - \pi < \beta - \alpha < 0\) \(\Leftrightarrow -\pi +\alpha < \beta < \alpha \), we must have \(\sin \theta < 0 \Leftrightarrow -\pi < \theta =\alpha - \eta < 0\) so that \(-\pi + \eta < \alpha < \eta \) and \(\sin ( \beta -\alpha +\theta )<0 \Leftrightarrow -\pi < \beta -\alpha +\theta =\beta - \eta <0\) so that \(-\pi +\eta < \beta < \eta \). All these constraints are satisfied if \(\max ( -\pi +\alpha ,-\pi +\eta ) = -\pi +\eta < \beta < \alpha < \eta \).

Modulo these conditions, the conditional probability \(\rho _{c| ( \alpha ,\beta )}\) to have intersection given the angles α, β is

$$\begin{aligned} \rho _{c| ( \alpha ,\beta )}&=\mathbb{P} [ \mathrm{Connection} \mid \alpha ,\beta ] \\ &= \mathbb{P} \biggl[ 0 \leq \frac{\sin ( \beta -\alpha +\theta )}{\sin ( \beta -\alpha )} d_{AB} \leq L_{A},0 \leq \frac{\sin \theta }{\sin ( \beta -\alpha )} d_{AB} \leq L_{B} \Bigm| \alpha ,\beta \biggr]. \end{aligned}$$

Using the cumulative distribution function (64) of the exponential distribution and the independence of \(L_{A}\), \(L_{B}\) gives

$$ \rho _{c| ( \alpha ,\beta )}= \biggl( 1-F_{L} \biggl( \frac{\sin ( \beta -\alpha +\theta )}{\sin ( \beta -\alpha )} d_{AB} \biggr) \biggr) \biggl( 1-F_{L} \biggl( \frac{\sin \theta }{\sin ( \beta -\alpha )} d_{AB} \biggr) \biggr) =e^{-\frac{d_{AB}}{\xi } \frac{\sin ( \frac{\alpha +\beta }{2} - \eta )}{ \sin ( \frac{\beta -\alpha }{2} )}}. $$

The probability to connect the two branches in the first quadrant is then

$$ \begin{aligned} \rho _{c}(d_{AB},\eta ) ={}& \frac{1}{4 \pi ^{2}} \biggl( \int _{ \alpha =\eta }^{\pi +\eta } \int _{\beta =\alpha }^{\pi +\eta } e^{- \frac{d_{AB}}{\xi } \frac{\sin ( \frac{\alpha +\beta }{2} - \eta )}{ \sin ( \frac{\beta -\alpha }{2} )}}\, d\alpha\, d \beta \\ &{}+ \int _{\alpha =-\pi +\eta }^{\eta } \int _{ \beta =-\pi +\eta }^{\alpha } e^{-\frac{d_{AB}}{\xi } \frac{\sin ( \frac{\alpha +\beta }{2} - \eta )}{ \sin ( \frac{\beta -\alpha }{2} )}} \, d\alpha\, d \beta \biggr), \end{aligned} $$

which depends on the distance between the two cells and their angle η, depending parametrically on the characteristic length ξ. Note that the condition of positivity of the sine ratio ensures an exponential decay of the probability as \(d_{AB}\) increases.


The positivity of arguments in the exponential implies that

$$ 4 \pi ^{2} \rho _{c}(d_{AB},\eta ) \leq \int _{\alpha =\eta }^{ \pi +\eta } \int _{\beta =\alpha }^{\pi +\eta }\, d\alpha\, d\beta + \int _{\alpha =-\pi +\eta }^{\eta } \int _{\beta =-\pi +\eta }^{ \alpha } \, d\alpha\, d\beta =\pi ^{2}, $$

so that \(\rho _{c}(d_{AB},\eta ) \leq \frac{1}{4}\).
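The computation above can be cross-checked by direct Monte Carlo sampling: draw one branch per cell and apply the sine-rule intersection conditions (a sketch with illustrative values of ξ, \(d_{AB}\) and η; it estimates the per-branch-pair probability \(\rho _{c}\), which should decay with distance and stay below 1/4):

```python
import numpy as np

rng = np.random.default_rng(1)

def rho_c_mc(d, eta, xi=2.0, n=200_000):
    """Monte Carlo estimate of the branch-pair intersection probability:
    sample angles uniformly on [0, 2*pi) and lengths exponentially (scale xi),
    then test 0 <= d_AC <= L_A and 0 <= d_BC <= L_B."""
    alpha = rng.uniform(0.0, 2.0 * np.pi, n)
    beta = rng.uniform(0.0, 2.0 * np.pi, n)
    LA = rng.exponential(xi, n)
    LB = rng.exponential(xi, n)
    s = np.sin(beta - alpha)
    dAC = np.sin(beta - eta) / s * d    # sin(beta - alpha + theta), theta = alpha - eta
    dBC = np.sin(alpha - eta) / s * d   # sin(theta)
    ok = (dAC >= 0) & (dAC <= LA) & (dBC >= 0) & (dBC <= LB)
    return float(ok.mean())

p_near = rho_c_mc(d=1.0, eta=0.3)
p_far = rho_c_mc(d=4.0, eta=0.3)
```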

Appendix D: Linear analysis

D.1 General solution of the linear dynamical system

Here, we consider dynamical system (34), \(\frac{d \vec {{\mathcal {X}}}}{dt} = {\mathcal {L}}.\vec {{\mathcal {X}}}+ \vec {{\mathcal {F}}}(t)\), whose general solution is

$$ \vec {{\mathcal {X}}}(t)= \int _{t_{0}}^{t} e^{{\mathcal {L}}(t-s)}.\vec {{\mathcal {F}}}(s) \,ds. $$

The behaviour of this integral depends on the spectrum of \({\mathcal {L}}\). The difficulty is that \({\mathcal {L}}\) is not diagonalisable (because of the activity term \(h_{B} I_{N,N}\)). We write it in the following form:

$$ {\mathcal {L}}= \underbrace{ \begin{pmatrix} \overbrace{ \begin{pmatrix} -\frac{I_{N,N}}{\tau _{B}} & W^{{A}}_{{B}} \\ W^{{B}}_{{A}} & -\frac{I_{N,N}}{\tau _{A}} \end{pmatrix}}^{{\mathcal {M}}} & \begin{matrix} 0_{N,N} \\ 0_{N,N} \end{matrix} \\ \begin{matrix} 0_{N,N} & 0_{N,N} \end{matrix} & -\frac{I_{N,N}}{\tau _{a}} \end{pmatrix}}_{{\mathcal {D}}} + \underbrace{ \begin{pmatrix} 0_{N,N} & 0_{N,N} & 0_{N,N} \\ 0_{N,N} & 0_{N,N} & 0_{N,N} \\ h_{B} I_{N,N} & 0_{N,N} & 0_{N,N} \end{pmatrix}}_{{\mathcal {J}}}. $$

We assume that the matrix \({\mathcal {M}}\) is diagonalisable. Even in this case, \({\mathcal {L}}\) is not diagonalisable because of the Jordan matrix \({\mathcal {J}}\). We denote by \(\vec {\phi }_{\beta }\) the normalised eigenvectors of \({\mathcal {M}}\) and by \(\lambda _{\beta }\) the corresponding eigenvalues with \(\beta =1 , \ldots, 2N\). The eigenvalues of \({\mathcal {D}}\) are then the 2N eigenvalues of \({\mathcal {M}}\) plus N eigenvalues \(-\frac{1}{\tau _{a}}\). We denote them by \(\lambda _{\beta }\) too, with \(\lambda _{\beta }=-\frac{1}{\tau _{a}}\), \(\beta =2N+1 , \ldots, 3N\). The eigenvectors of \({\mathcal {D}}\) have the form

$$ \vec {{\mathcal {P}}}_{\beta }= \textstyle\begin{cases} \begin{pmatrix} \vec {\phi }_{\beta } \\ \vec {0}_{N} \end{pmatrix}, & \beta =1 , \ldots, 2N; \\ \vec {e}_{\beta }, & \beta =2N+1 , \ldots, 3N, \end{cases} $$

where \(\vec {0}_{N}\) is the N-dimensional vector with entries 0 and \(\vec {e}_{\beta }\) is the canonical basis vector in direction β. The matrix \({\mathcal {P}}\) made by the columns \(\vec {{\mathcal {P}}}_{\beta }\) is the matrix which diagonalises \({\mathcal {D}}\). We denote by \(\Lambda ={\mathcal {P}}^{-1}{\mathcal {D}}{\mathcal {P}}\) the diagonal form, where \(\Lambda =\operatorname{Diag} \{ \lambda _{\beta }, \beta =1 , \ldots, 3N \} \). \({\mathcal {P}}\) and \({\mathcal {P}}^{-1}\) have the following block form:

$$ {\mathcal {P}}= \begin{pmatrix} \Phi & 0_{2N,N} \\ 0_{N,2N} & I_{N,N} \end{pmatrix};\qquad {\mathcal {P}}^{-1}= \begin{pmatrix} \Phi ^{-1} & 0_{2N,N} \\ 0_{N,2N} & I_{N,N} \end{pmatrix}, $$

where Φ is the matrix whose columns are the eigenvectors \(\vec {\phi }_{\beta }\) of \({\mathcal {M}}\). This form implies that \({\mathcal {P}}_{\alpha \beta }={\mathcal {P}}^{-1}_{\alpha \beta }=\delta _{ \alpha \beta }\) for \(\beta =2N+1 , \ldots, 3N\).

We now compute \(e^{{\mathcal {L}}t}\) using the series expansion \(e^{{\mathcal {L}}t} = \sum_{n=0}^{+\infty } \frac{t^{n}}{n!} ( {\mathcal {D}}+{\mathcal {J}})^{n}\). Using the relations

$$ {\mathcal {J}}^{2}=0_{3N,3N};\qquad {\mathcal {D}}^{n}.{\mathcal {J}}= \biggl( - \frac{1}{\tau _{a}} \biggr)^{n} {\mathcal {J}}, $$

one proves that

$$ ( {\mathcal {D}}+{\mathcal {J}})^{n} = {\mathcal {D}}^{n} + {\mathcal {J}}. \sum_{k=0}^{n-1} \biggl( - \frac{1}{\tau _{a}} \biggr)^{k} {\mathcal {D}}^{n-1-k}. $$


$$ e^{{\mathcal {L}}t} = e^{{\mathcal {D}}t} + {\mathcal {J}}. \sum _{n=1}^{+\infty } \frac{t^{n}}{n!} \sum _{k=0}^{n-1} \biggl( -\frac{1}{\tau _{a}} \biggr)^{k} {\mathcal {D}}^{n-1-k}. $$

We use the matrices \({\mathcal {P}}\), \({\mathcal {P}}^{-1}\) to write it in the form

$$ e^{{\mathcal {L}}t} = {\mathcal {P}}.e^{\Lambda t}.{\mathcal {P}}^{-1} + {\mathcal {J}}. \sum_{n=1}^{+\infty } \frac{t^{n}}{n!} \sum_{k=0}^{n-1} \biggl( - \frac{1}{\tau _{a}} \biggr)^{k} {\mathcal {P}}.\Lambda ^{n-1-k}. {\mathcal {P}}^{-1}. $$
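This decomposition can be verified numerically on a small instance (a sketch with N = 1 and illustrative time constants and weights, not values from the paper; a truncated Taylor series stands in for the matrix exponential):

```python
import numpy as np

# small instance: N = 1 BCell, 1 ACell, 1 activity variable (illustrative values)
tauB, tauA, taua, hB = 0.5, 0.3, 0.8, 1.7
wBA, wAB = 0.9, -0.4
M = np.array([[-1/tauB, wAB],
              [wBA, -1/tauA]])
D = np.zeros((3, 3)); D[:2, :2] = M; D[2, 2] = -1/taua
J = np.zeros((3, 3)); J[2, 0] = hB
L = D + J

def expm_series(A, t, terms=60):
    """Truncated Taylor series of the matrix exponential (fine for small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ (A * t) / n
        out = out + term
    return out

t = 0.7
lhs = expm_series(L, t)
# right-hand side: e^{Dt} + J . sum_{n>=1} t^n/n! sum_{k=0}^{n-1} (-1/tau_a)^k D^{n-1-k}
S = np.zeros((3, 3))
tn = 1.0
for n in range(1, 60):
    tn *= t / n  # t^n / n!
    S = S + tn * sum((-1/taua)**k * np.linalg.matrix_power(D, n-1-k)
                     for k in range(n))
rhs = expm_series(D, t) + J @ S
```

The same check also confirms the two algebraic relations \({\mathcal {J}}^{2}=0\) and \({\mathcal {D}}.{\mathcal {J}}= -\frac{1}{\tau _{a}}{\mathcal {J}}\) used in the expansion.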

From relation (36) we obtain, for the entries of \(\vec {{\mathcal {X}}}(t)\),

$$ \begin{aligned} {\mathcal {X}}_{\alpha }(t) ={}& \sum _{\beta ,\gamma =1}^{3N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} {\mathcal {F}}_{\gamma }(s) \,ds \\ &{}+ \sum_{\delta ,\beta , \gamma =1}^{3N} {\mathcal {J}}_{\alpha ,\delta } {\mathcal {P}}_{\delta \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} \sum _{n=1}^{+ \infty } \frac{(t-s)^{n}}{n!} \sum _{k=0}^{n-1} \biggl( - \frac{1}{\tau _{a}} \biggr)^{k} \lambda _{\beta }^{n-1-k} {\mathcal {F}}_{\gamma }(s) \,ds. \end{aligned} $$

We consider the first term of this equation. We use \({\mathcal {F}}_{\gamma }=F_{{B}_{i}}\), \(\gamma =i=1 , \ldots, N\) (BCells). We recall that, from (13), \(F_{{B}_{i}}(t)= \frac{V_{i_{\mathrm{drive}}}}{\tau _{B}} + \frac{d V_{i_{\mathrm{drive}}}}{d t}\), so that

$$ \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} {\mathcal {F}}_{\gamma }(s) \,ds= V_{\gamma _{\mathrm{drive}}}(t)+ \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{ \gamma _{\mathrm{drive}}}(s) \,ds. $$


$$ \sum_{\beta =1}^{3N}\sum _{\gamma =1}^{3N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma }V_{\gamma _{\mathrm{drive}}}(t) =\sum _{\gamma =1}^{3N} V_{\gamma _{\mathrm{drive}}}(t) \Biggl( \sum _{\beta =1}^{3N} {\mathcal {P}}_{ \alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \Biggr) = \sum _{ \gamma =1}^{3N} V_{\gamma _{\mathrm{drive}}}(t) \delta _{\alpha \gamma } = V_{ \alpha _{\mathrm{drive}}}(t). $$

We extend the definition of drive term (1) to 3N dimensions such that \(V_{\alpha _{\mathrm{drive}}}(t)=0\) if \(\alpha > N\). Thus

$$\begin{aligned} &\sum_{\beta ,\gamma =1}^{3N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{ \beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} {\mathcal {F}}_{\gamma }(s) \,ds \\ &\quad = V_{\alpha _{\mathrm{drive}}}(t) + \sum _{\beta =1}^{3N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds. \end{aligned}$$

We decompose the sum over β in three sums: \(\beta =1 , \ldots, N\) corresponding to BCells; \(\beta =N+1 , \ldots, 2N\) corresponding to ACells; \(\beta =2N+1 , \ldots, 3N\) corresponding to activities of BCells. We define (Eq. (38) in the text):

$$ {\mathcal {E}}^{B}_{B,\alpha }(t)=\sum _{\beta =1}^{N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds,\quad \alpha =1 , \ldots, N, $$

corresponding to the indirect effect, via the ACell connectivity, of the drive on BCell voltages. The term

$$ {\mathcal {E}}^{B}_{A,\alpha }(t)=\sum _{\beta =N+1}^{2N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds, \quad \alpha =N+1 , \ldots, 2N $$

(Eq. (39) in the text) corresponds to the effect of BCell drive on ACell voltages. The third term

$$ \sum_{\beta =2N+1}^{3N} \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \sum _{\gamma =1}^{N} {\mathcal {P}}_{\alpha \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds = 0,\quad \alpha =2N+1 , \ldots, 3N, $$

because \({\mathcal {P}}^{-1}_{\beta \gamma }=\delta _{\beta \gamma }\).

To compute the second term in Eq. (67), we first remark that \({\mathcal {J}}_{\alpha ,\delta }=0\) if \(\alpha =1 , \ldots, 2N\) and \({\mathcal {J}}_{\alpha ,\delta }=h_{B} \delta _{\alpha -2N,\delta }\) if \(\alpha =2N+1 , \ldots, 3N\), so that this term is nonzero only if \(\alpha =2N+1 , \ldots, 3N\) (BCell activities). Also \({\mathcal {F}}_{\gamma }\neq 0\) for \(\gamma =1 , \ldots, N\), while \({\mathcal {P}}^{-1}_{\beta \gamma } = \delta _{\beta \gamma }\) for \(\beta =2N+1 , \ldots, 3N\). Therefore, for \(\alpha =2N+1 , \ldots, 3N\), the second term in \({\mathcal {X}}_{\alpha }(t)\) is

$$ h_{B} \sum_{\beta =1}^{2N} \sum_{\gamma =1}^{N} {\mathcal {P}}_{ \alpha -2N \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} \sum _{n=1}^{+\infty } \frac{(t-s)^{n}}{n!} \sum _{k=0}^{n-1} \biggl( -\frac{1}{\tau _{a}} \biggr)^{k} \lambda _{\beta }^{n-1-k} {\mathcal {F}}_{\gamma }(s) \,ds. $$

We now simplify the series:

$$\begin{aligned} \sum_{n=1}^{+\infty } \frac{(t-s)^{n}}{n!} \sum_{k=0}^{n-1} \biggl( - \frac{1}{\tau _{a}} \biggr)^{k} \lambda _{\beta }^{n-1-k} &= \sum_{n=1}^{+\infty } \frac{(t-s)^{n}}{n!} \lambda _{\beta }^{n-1} \sum_{k=0}^{n-1} \biggl( -\frac{1}{\tau _{a} \lambda _{\beta }} \biggr)^{k} \\ &=\sum_{n=1}^{+\infty } \frac{(t-s)^{n}}{n!} \lambda _{\beta }^{n-1} \biggl( \frac{1- ( -\frac{1}{\tau _{a} \lambda _{\beta }} )^{n}}{1+\frac{1}{\tau _{a} \lambda _{\beta }}} \biggr) \\ &=\frac{1}{\lambda _{\beta }} \frac{1}{1+\frac{1}{\tau _{a} \lambda _{\beta }}} \Biggl[ \sum _{n=1}^{+ \infty } \frac{ ( \lambda _{\beta } (t-s) )^{n}}{n!} - \sum _{n=1}^{+\infty } \frac{(-\frac{t-s}{\tau _{a}})^{n}}{n!} \Biggr] \\ &= \frac{1}{\lambda _{\beta }+\frac{1}{\tau _{a} }} \bigl[ e^{ \lambda _{\beta } (t-s)} - e^{-\frac{t-s}{\tau _{a}}} \bigr]. \end{aligned}$$
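This resummation can be checked numerically for scalar values (illustrative λ, \(\tau _{a}\), t with \(\lambda \neq -\frac{1}{\tau _{a}}\); the truncated double sum should match the closed form):

```python
import math

lam, taua, t = -1.3, 0.8, 0.9   # illustrative values with lam != -1/taua
# truncated double sum: sum_{n>=1} t^n/n! sum_{k=0}^{n-1} (-1/taua)^k lam^{n-1-k}
lhs = sum(t**n / math.factorial(n)
          * sum((-1/taua)**k * lam**(n-1-k) for k in range(n))
          for n in range(1, 40))
# closed form: (e^{lam t} - e^{-t/taua}) / (lam + 1/taua)
rhs = (math.exp(lam*t) - math.exp(-t/taua)) / (lam + 1/taua)
```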

The time integral is computed the same way as Eq. (68):

$$\begin{aligned} &\int _{t_{0}}^{t} \sum _{n=1}^{+\infty } \frac{(t-s)^{n}}{n!} \sum _{k=0}^{n-1} \biggl( -\frac{1}{\tau _{a}} \biggr)^{k} \lambda _{\beta }^{n-1-k} {\mathcal {F}}_{\gamma }(s) \,ds \\ &\quad = \frac{1}{\lambda _{\beta }+\frac{1}{\tau _{a} }} \biggl[ \int _{t_{0}}^{t} e^{\lambda _{\beta } (t-s)} {\mathcal {F}}_{\gamma }(s) \,ds - \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} {\mathcal {F}}_{\gamma }(s) \,ds \biggr] \\ &\quad =\frac{1}{\lambda _{\beta }+\frac{1}{\tau _{a} }} \biggl[ \biggl( \frac{1}{\tau _{B}} + \lambda _{\beta } \biggr) \int _{t_{0}}^{t} e^{ \lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds - \biggl( \frac{1}{\tau _{B}} - \frac{1}{\tau _{a}} \biggr) \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} V_{\gamma _{\mathrm{drive}}}(s) \,ds \biggr]. \end{aligned}$$

In analogy with Eqs. (38), (39) in the text, we introduce

$$\begin{aligned} &{\mathcal {E}}^{B}_{a,\alpha }(t)= h_{B} \sum _{\beta =1}^{2N} \sum_{ \gamma =1}^{N} {\mathcal {P}}_{\alpha -2N \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \frac{1}{\lambda _{\beta }+\frac{1}{\tau _{a} }} \begin{bmatrix} ( \frac{1}{\tau _{B}} + \lambda _{\beta } ) \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds \\ {}- ( \frac{1}{\tau _{B}} - \frac{1}{\tau _{a}} ) \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} V_{\gamma _{\mathrm{drive}}}(s) \,ds \end{bmatrix}, \\ &\quad \alpha =2N+1 , \ldots, 3N, \end{aligned}$$

corresponding to the action of BCells and ACells on the activity of BCells via the network effect. Let us consider in more detail the second term. From (66), \(\sum_{\beta =1}^{2N} {\mathcal {P}}_{\alpha -2N \beta } {\mathcal {P}}^{-1}_{ \beta \gamma } = \delta _{\alpha -2N \gamma }\), thus

$$\begin{aligned} \sum_{\beta =1}^{2N} \sum _{\gamma =1}^{N} {\mathcal {P}}_{\alpha -2N \beta } {\mathcal {P}}^{-1}_{\beta \gamma } \int _{t_{0}}^{t} e^{- \frac{t-s}{\tau _{a}}} V_{\gamma _{\mathrm{drive}}}(s) \,ds &= \sum_{ \gamma =1}^{N} \Biggl( \int _{t_{0}}^{t} e^{- \frac{t-s}{\tau _{a}}} V_{\gamma _{\mathrm{drive}}}(s) \,ds \underbrace{\sum_{\beta =1}^{2N} {\mathcal {P}}_{\alpha -2N \beta } {\mathcal {P}}^{-1}_{\beta \gamma }}_{\delta _{ \alpha -2N \gamma }} \Biggr) \\ &= \int _{t_{0}}^{t} e^{-\frac{t-s}{\tau _{a}}} V_{\alpha -2N_{\mathrm{drive}}}(s) \,ds \equiv A^{0}_{\alpha -2N}(t) \end{aligned}$$

(Eq. (41) in the text).

This finally leads to ((40) in the text)

$$\begin{aligned} &{\mathcal {E}}^{B}_{a,\alpha }(t)= h_{B} \Biggl( \sum _{\beta =1}^{2N} \sum _{\gamma =1}^{N} {\mathcal {P}}_{\alpha -2N \beta } {\mathcal {P}}^{-1}_{ \beta \gamma } \frac{\lambda _{\beta }+\frac{1}{\tau _{B}}}{\lambda _{\beta }+\frac{1}{\tau _{a}}} \int _{t_{0}}^{t} e^{\lambda _{\beta }(t-s)} V_{\gamma _{\mathrm{drive}}}(s) \,ds + \frac{-\frac{1}{\tau _{B}} + \frac{1}{\tau _{a}}}{\lambda _{\beta }+\frac{1}{\tau _{a}}} A^{0}_{\alpha -2N}(t) \Biggr), \\ &\quad \alpha =2N+1 , \ldots, 3N. \end{aligned}$$

D.2 Spectrum of \({\mathcal {L}}\) and stability of dynamical system (34)

Here, we assume that each BCell connects to only one ACell, with a weight \(w^{+}\) uniform across BCells, so that \(W^{{B}}_{{A}} = w^{+} I_{N,N}\), \(w^{+}>0\). We also assume that ACells connect to BCells through a connectivity matrix \({\mathcal {W}}\), not necessarily symmetric, with a uniform weight \(- w^{-}\), \(w^{-}>0\), so that \(W^{{A}}_{{B}} = -w^{-} {\mathcal {W}}\). We have shown in the previous section that the 2N first eigenvalues and eigenvectors of \({\mathcal {L}}\) are given by the 2N eigenvalues and eigenvectors of \({\mathcal {M}}\), which now reads as follows:

$$ {\mathcal {M}}= \begin{pmatrix} -\frac{I_{N,N}}{\tau _{B}} &-w^{-} {\mathcal {W}}\\ w^{+} I_{N,N} & -\frac{I_{N,N}}{\tau _{A}} \end{pmatrix}. $$

We now show that this specific structure allows us to compute the spectrum of \({\mathcal {M}}\) in terms of the spectrum of \({\mathcal {W}}\).

D.2.1 Eigenvalues and eigenvectors of \({\mathcal {M}}\)

We denote by \(\kappa _{n}\), \(n=1 , \ldots, N\), the eigenvalues of \({\mathcal {W}}\), ordered as \(\vert \kappa _{1} \vert \leq \vert \kappa _{2} \vert \leq \cdots \leq \vert \kappa _{N} \vert \), and by \(\vec {\psi }_{n}\) the corresponding eigenvectors. We normalise \(\vec {\psi }_{n}\) so that \(\vec {\psi }_{n}^{\dagger }.\vec {\psi }_{n}=1\), where † denotes the adjoint. (Note that, as \({\mathcal {W}}\) is not symmetric in general, eigenvectors are complex.)

We shall neglect the case where, simultaneously, \(\frac{1}{\tau }=0\) and \(\kappa _{n}=0\) for some n.


For each n, there is a pair of eigenvalues \(\lambda _{n}^{\pm }\) and eigenvectors

$$ \vec {\phi }_{n}^{\pm } = c_{n}^{\pm } \begin{pmatrix} \vec {\psi }_{n} \\ \rho _{n}^{\pm } \vec {\psi }_{n} \end{pmatrix} $$

of \({\mathcal {M}}\), with \(c_{n}^{\pm }=\frac{1}{\sqrt{1+ ( \rho _{n}^{\pm } )^{2}}}\) (normalisation factor) and

$$ \rho _{n}^{\pm }= \textstyle\begin{cases} \frac{1}{2 \tau w^{-} \kappa _{n} } ( 1 \pm \sqrt{1- 4 \mu \kappa _{n} } ),& \kappa _{n} \neq 0, \frac{1}{\tau } \neq 0; \\ w^{+} \tau , & \kappa _{n} =0,\frac{1}{\tau } \neq 0; \\ \pm \sqrt{- \frac{w^{+}}{w^{-}} \frac{1}{\kappa _{n}}},& \frac{1}{\tau }=0, \end{cases} $$


$$ \frac{1}{\tau }= \biggl( \frac{1}{\tau _{A}} - \frac{1}{\tau _{B}} \biggr) $$


$$ \frac{1}{\tau _{AB}}= \biggl( \frac{1}{\tau _{A}} + \frac{1}{\tau _{B}} \biggr). $$

Eigenvalues are given by

$$ \lambda _{n}^{\pm }= \textstyle\begin{cases} -\frac{1}{2 \tau _{AB}} \mp \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} },& \frac{1}{\tau } \neq 0; \\ -\frac{1}{\tau _{A}} \mp \sqrt{-w^{-} w^{+} \kappa _{n}}, & \frac{1}{\tau } = 0, \end{cases} $$


$$ \mu = w^{-} w^{+} \tau ^{2} \geq 0. $$

As a consequence, in addition to the N last eigenvalues \(-\frac{1}{\tau _{A}}\), \({\mathcal {L}}\) admits 2N eigenvalues given by (71), while the 2N first columns of the matrix \({\mathcal {P}}\) (eigenvectors of \({\mathcal {L}}\)) are as follows:

$$ \vec {{\mathcal {P}}}_{\beta }=\frac{1}{\sqrt{1+ ( \rho _{n}^{-} )^{2}}} \begin{pmatrix} \vec {\psi }_{n} \\ \rho _{n}^{-} \vec {\psi }_{n} \\ \vec {0}_{N}\end{pmatrix}; \qquad \vec {{\mathcal {P}}}_{\beta +N}= \frac{1}{\sqrt{1+ ( \rho _{n}^{+} )^{2}}} \begin{pmatrix} \vec {\psi }_{n} \\ \rho _{n}^{+} \vec {\psi }_{n} \\ \vec {0}_{N}\end{pmatrix},\quad \beta =n=1 , \ldots, N. $$

For the N last eigenvectors \(\vec {{\mathcal {P}}}_{\beta }=\vec {e}_{\beta }\), \(\beta =2N+1 , \ldots, 3N\).


The structure of these eigenvectors is quite instructive. Indeed, the factors \(\rho _{n}^{\pm }\) control the projection of the eigenvectors \(\vec {{\mathcal {P}}}_{\beta }\) on the space of ACells, thereby tuning the influence of ACells via lateral connectivity.
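These spectral formulas can be checked numerically by assembling \({\mathcal {M}}\) from a random connectivity matrix \({\mathcal {W}}\) and comparing its eigenvalues with the predicted pairs \(\lambda _{n}^{\pm }\) (a sketch with illustrative parameters; both square-root branches are kept, so the labelling of + and − is immaterial):

```python
import numpy as np

rng = np.random.default_rng(2)
N, tauB, tauA = 5, 0.5, 0.3
wm, wp = 0.6, 0.9                     # illustrative weights w^- and w^+
W = rng.standard_normal((N, N))       # generic (asymmetric) connectivity
I = np.eye(N)
M = np.block([[-I/tauB, -wm*W], [wp*I, -I/tauA]])

inv_tau = 1/tauA - 1/tauB             # 1/tau
inv_tauAB = 1/tauA + 1/tauB           # 1/tau_AB
mu = wm * wp / inv_tau**2             # mu = w^- w^+ tau^2
kappa = np.linalg.eigvals(W)
root = np.sqrt(1 - 4*mu*kappa + 0j) * inv_tau / 2
lam_pred = np.concatenate([-inv_tauAB/2 - root, -inv_tauAB/2 + root])
lam_num = np.linalg.eigvals(M)
# every predicted eigenvalue should match one numerical eigenvalue of M
err = max(np.min(np.abs(lam_num - lp)) for lp in lam_pred)
```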


Proof. We use here the generic notation \(\lambda _{\beta }\), \(\vec {\phi }_{\beta }\), \(\beta =1 , \ldots, 2N\), for the eigenvalues and associated eigenvectors of \({\mathcal {M}}\). If we assume that \(\vec {\phi }_{\beta }\) is of the form

$$ \vec {\phi }_{\beta }= \begin{pmatrix} \vec {\psi }_{n} \\ \rho \vec {\psi }_{n} \end{pmatrix} $$

for some n, then we have

$$ {\mathcal {M}}.\vec {\phi }_{\beta }= \begin{pmatrix} -\frac{I}{\tau _{B}} &-w^{-} {\mathcal {W}}\\ w^{+} I & -\frac{I}{\tau _{A}} \end{pmatrix}. \begin{pmatrix} \vec {\psi }_{n} \\ \rho \vec {\psi }_{n}\end{pmatrix} = \begin{pmatrix} ( -\frac{1}{\tau _{B}} -w^{-} \rho \kappa _{n} ). \vec {\psi }_{n} \\ ( -\frac{\rho }{\tau _{A}} + w^{+} ).\vec {\psi }_{n}\end{pmatrix} = \lambda _{\beta } \begin{pmatrix} \vec {\psi }_{n} \\ \rho \vec {\psi }_{n}\end{pmatrix}, $$

which gives

$$ \textstyle\begin{cases} ( -\frac{1}{\tau _{B}} -w^{-} \rho \kappa _{n} ) = \lambda _{\beta },\\ ( -\frac{\rho }{\tau _{A}} + w^{+} ) = \lambda _{\beta }\rho , \end{cases} $$

leading to

$$ w^{-} \kappa _{n} \rho ^{2} - \frac{1}{\tau } \rho + w^{+} =0, $$


$$ \frac{1}{\tau }=\frac{1}{\tau _{A}} - \frac{1}{\tau _{B}}. $$

This gives, if \(\kappa _{n} \neq 0\) and \(\frac{1}{\tau } \neq 0\),

$$ \rho _{n}^{\pm }=\frac{1}{2 \tau w^{-} \kappa _{n} } ( 1 \pm \sqrt{1- 4 \mu \kappa _{n} } ), $$


$$ \mu = w^{-} w^{+} \tau ^{2} \geq 0. $$

Thus, for each n, there are two eigenvalues:

$$ \lambda _{n}^{\pm }= -\frac{1}{2 \tau _{AB}} \mp \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} }, $$


$$ \frac{1}{\tau _{AB}}=\frac{1}{\tau _{A}} + \frac{1}{\tau _{B}}. $$

Note that \(\frac{1}{\tau _{AB}} \geq \frac{1}{\tau }\).

If \(\kappa _{n}=0\) and \(\frac{1}{\tau } \neq 0\), then \(\rho _{n}^{\pm }=w^{+} \tau \) and

$$ \lambda _{\beta }=-\frac{1}{\tau _{B}} -w^{-} \rho \kappa _{n}. $$

Finally, if \(\kappa _{n} \neq 0\), \(\frac{1}{\tau }=0\) (\(\tau _{A}= \tau _{B}\)), then \(\rho _{n}^{\pm }=\pm \sqrt{-\frac{w^{+}}{w^{-}} \frac{1}{\kappa _{n}}}\) and \(\lambda _{n}^{\pm }=-\frac{1}{\tau _{B}} \pm \sqrt{-w^{-} w^{+} \kappa _{n}} \). If \(\kappa _{n}=0\) and \(\frac{1}{\tau }=0\), there is no solution for ρ. □


When \(\mu =0\), \({\mathcal {M}}\) is diagonal: the N first eigenvalues are \(-\frac{1}{\tau _{B}}\), the N next eigenvalues are \(-\frac{1}{\tau _{A}}\). We have in this case \(\lambda _{n}^{+} = -\frac{1}{\tau _{B}} \) and \(\lambda _{n}^{-} = -\frac{1}{\tau _{A}}\). Therefore, in order to be consistent with this diagonal form of \({\mathcal {M}}\) when \(\mu =0\), we order the eigenvalues and eigenvectors of \({\mathcal {M}}\) such that the N first eigenvalues are \(\lambda _{\beta }=\lambda _{n}^{+}\), \(\beta =1 , \ldots, N\), and the N next are \(\lambda _{\beta }=\lambda _{n}^{-}\), \(\beta =N+1 , \ldots, 2N\).

D.2.2 Stability of eigenmodes

Stability of eigenmodes when \({\mathcal {W}}\) is symmetric

If \({\mathcal {W}}\) is symmetric, its eigenvalues \(\kappa _{n}\) are real, but \(\lambda _{\beta }\), \(\beta =1 , \ldots, 2N\), can be real or complex, depending on \(\kappa _{n}\), as μ is positive.

We have four cases:

\(\kappa _{n} < 0\). Then, from (71), the \(\lambda _{\beta }\)s are real, and there are two cases. If \(\frac{1}{\tau }>0\), the eigenvalues \(\lambda _{\beta }\), \(\beta =1 , \ldots, N\), can have a positive real part (unstable), while the \(\lambda _{\beta }\), \(\beta =N+1 , \ldots, 2N\), always have a negative real part (stable); for \(\frac{1}{\tau } <0\), the situation is inverted. In both cases, the eigenvalue \(\lambda _{\beta }\) has a positive real part if

$$ \mu > - \frac{1}{\kappa _{n}} \frac{\tau _{A} \tau _{B}}{ ( \tau _{B} -\tau _{A} )^{2}} \equiv \mu _{n,u}, $$

which reads, using the definition of μ, as follows:

$$ w^{-} w^{+} > - \frac{1}{\tau _{A} \tau _{B}} \frac{1}{\kappa _{n}}. $$

Thus, \(\tau _{A}\), \(\tau _{B}\) play a symmetric role. If \(\frac{1}{\tau }=0\) (\(\tau _{A}=\tau _{B}\)), all eigenvalues are real. Eigenvalues \(\lambda _{n}^{-}\) are all stable. The eigenvalue \(\lambda _{n}^{+}\) becomes unstable if \(w^{-} w^{+} > - \frac{1}{\tau _{A}^{2}} \frac{1}{\kappa _{n}}\), corresponding to (73).
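The instability threshold (73) can be illustrated numerically: for a real negative eigenvalue \(\kappa _{n}\) of \({\mathcal {W}}\), the largest real part of the associated eigenvalue pair changes sign exactly at \(w^{-} w^{+} = -\frac{1}{\tau _{A}\tau _{B}}\frac{1}{\kappa _{n}}\) (a sketch with illustrative time constants):

```python
import numpy as np

tauA, tauB = 0.3, 0.5                 # illustrative time constants

def max_real_part(wm, wp, kappa):
    """Largest real part of the eigenvalue pair associated with a real
    eigenvalue kappa of the connectivity matrix W."""
    inv_tauAB = 1/tauA + 1/tauB
    inv_tau = 1/tauA - 1/tauB
    mu = wm * wp / inv_tau**2         # mu = w^- w^+ tau^2
    root = np.sqrt(1 - 4*mu*kappa + 0j) * inv_tau / 2
    return max((-inv_tauAB/2 - root).real, (-inv_tauAB/2 + root).real)

kappa = -2.0                          # a negative eigenvalue of W
thr = -1/(tauA * tauB * kappa)        # predicted instability threshold on w^- w^+
below = max_real_part(wm=0.9*thr, wp=1.0, kappa=kappa)
above = max_real_part(wm=1.1*thr, wp=1.0, kappa=kappa)
```

Just below the threshold all real parts are negative (stable), just above it one becomes positive (unstable).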


There are two cases.

\(\frac{1}{\tau } > 0 \Leftrightarrow \tau _{A} < \tau _{B}\).

$$\begin{aligned}& \lambda _{\beta }=-\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} } > 0 \\& \quad \Leftrightarrow \quad \pm \sqrt{1- 4 \mu \kappa _{n} } > \frac{\tau }{\tau _{AB}}. \end{aligned}$$

Only + is possible (because τ and \(\tau _{AB}\) are positive). This gives

$$\begin{aligned}& 1- 4 \mu \kappa _{n} > \frac{\tau ^{2}}{\tau _{AB}^{2}}, \\& 1- \frac{\tau ^{2}}{\tau _{AB}^{2}} = - 4 \frac{\tau _{A} \tau _{B}}{ ( \tau _{B} -\tau _{A} )^{2}}> 4 \mu \kappa _{n}, \end{aligned}$$

which is possible because \(\kappa _{n} < 0\). Thus, \(\lambda _{\beta }\) is unstable if

$$ \mu > - \frac{1}{\kappa _{n}} \frac{\tau _{A} \tau _{B}}{ ( \tau _{B} -\tau _{A} )^{2}} \equiv \mu _{n,u}. $$

\(\frac{1}{\tau } < 0 \Leftrightarrow \tau _{A} > \tau _{B}\).

$$\begin{aligned}& -\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} } > 0 \\& \quad \Leftrightarrow\quad \pm \sqrt{1- 4 \mu \kappa _{n} } < \frac{\tau }{\tau _{AB}}. \end{aligned}$$

Only − is possible (because \(\tau <0\)). This gives

$$ 1- 4 \mu \kappa _{n} > \frac{\tau ^{2}}{\tau _{AB}^{2}}, $$

the same condition as in the previous item.

– If \(\frac{1}{\tau }=0\), \(\lambda _{\beta }=-\frac{1}{\tau _{A}} \mp \sqrt{-w^{-} w^{+} \kappa _{n}}\), so that the eigenvalues are real. The eigenvalues with the minus sign (\(\lambda _{n}^{-}\)) are all stable. The eigenvalue \(\lambda _{n}^{+}\) becomes unstable if

$$ w^{-} w^{+} > - \frac{1}{\tau _{A}^{2}} \frac{1}{\kappa _{n}}. $$
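As a numerical sanity check (not part of the derivation), the sketch below evaluates \(\lambda _{n}^{+}\) around the threshold \(\mu _{n,u}\). It assumes the reductions \(\frac{1}{\tau _{AB}}=\frac{1}{\tau _{A}}+\frac{1}{\tau _{B}}\), \(\frac{1}{\tau }=\frac{1}{\tau _{A}}-\frac{1}{\tau _{B}}\) and \(\mu = w^{-} w^{+} \tau ^{2}\), which are consistent with the formulas above but should be checked against the definitions in the main text; the numerical values are arbitrary.

```python
import math

# Assumed reductions (to be checked against the main text):
#   1/tau_AB = 1/tau_A + 1/tau_B,  1/tau = 1/tau_A - 1/tau_B,  mu = w- w+ tau^2
tau_A, tau_B = 0.1, 0.3          # arbitrary time constants, tau_A < tau_B (1/tau > 0)
kappa_n = -2.0                   # arbitrary negative eigenvalue of W (case kappa_n < 0)

tau_AB = 1.0 / (1.0 / tau_A + 1.0 / tau_B)
tau = 1.0 / (1.0 / tau_A - 1.0 / tau_B)

def lambda_plus(mu):
    """lambda_n^+ = -1/(2 tau_AB) + (1/(2 tau)) sqrt(1 - 4 mu kappa_n)."""
    return -1.0 / (2.0 * tau_AB) + math.sqrt(1.0 - 4.0 * mu * kappa_n) / (2.0 * tau)

# Threshold mu_{n,u} = -(1/kappa_n) tau_A tau_B / (tau_B - tau_A)^2
mu_nu = -(1.0 / kappa_n) * tau_A * tau_B / (tau_B - tau_A) ** 2

print(lambda_plus(0.9 * mu_nu))  # below the threshold: negative (stable)
print(lambda_plus(1.1 * mu_nu))  # above the threshold: positive (unstable)

# Under mu = w- w+ tau^2, mu > mu_{n,u} is equivalent to w- w+ > -1/(tau_A tau_B kappa_n)
print(mu_nu / tau ** 2, -1.0 / (tau_A * tau_B * kappa_n))  # the two bounds coincide
```

At \(\mu = \mu _{n,u}\), \(\lambda _{n}^{+}\) vanishes, and the bound on μ translates exactly into the bound on \(w^{-} w^{+}\) stated above.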


\(\kappa _{n} > 0\). Then \(\lambda _{\beta }\), \(\beta =1 , \ldots, 2N\), are real or complex. If \(\frac{1}{\tau } \neq 0\), they are complex if

$$ \mu > \frac{1}{4 \kappa _{n}} \equiv \mu _{n,c}. $$

In this case the real part is \(-\frac{1}{2 \tau _{AB}}\), the imaginary part is \(\pm \frac{1}{2 \tau } \sqrt{4 \mu \kappa _{n} - 1}\), and all eigenvalues are stable. If \(\mu \leq \mu _{n,c}\), eigenvalues \(\lambda _{\beta }\) are real and all modes are stable as well. Indeed:

  • If \(\frac{1}{\tau }=0\), all eigenvalues are equal to \(-\frac{1}{2 \tau _{AB}}\), hence are stable.

  • If \(\frac{1}{\tau } > 0 \Leftrightarrow \tau _{A} < \tau _{B}\).

    $$\begin{aligned}& -\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} } > 0 \\& \quad \Leftrightarrow\quad \pm \sqrt{1- 4 \mu \kappa _{n} } > \frac{\tau }{\tau _{AB}}, \end{aligned}$$

    which is not possible because \(\frac{\tau }{\tau _{AB}} > 1\), whereas \(\sqrt{1- 4 \mu \kappa _{n} } < 1\).

  • If \(\frac{1}{\tau } < 0 \Leftrightarrow \tau _{A} > \tau _{B}\).

    $$\begin{aligned}& -\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \sqrt{1- 4 \mu \kappa _{n} } > 0 \\& \quad \Leftrightarrow \quad \pm \sqrt{1- 4 \mu \kappa _{n} } < \frac{\tau }{\tau _{AB}}. \end{aligned}$$

    Only − is possible because \(\tau < 0\).

    $$ 1- 4 \mu \kappa _{n} > \frac{\tau ^{2}}{\tau _{AB}^{2}}, $$

    which is not possible because \(\frac{ \vert \tau \vert }{\tau _{AB}} > 1\), whereas \(\sqrt{1- 4 \mu \kappa _{n} } < 1\).
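The same kind of numerical sketch (same assumed reductions as before, arbitrary values) illustrates the conclusion for \(\kappa _{n} > 0\): modes are stable both below \(\mu _{n,c}\), where the eigenvalues are real, and above it, where they form a complex pair with real part \(-\frac{1}{2 \tau _{AB}}\).

```python
import cmath

# Assumed reductions: 1/tau_AB = 1/tau_A + 1/tau_B, 1/tau = 1/tau_A - 1/tau_B
tau_A, tau_B = 0.1, 0.3          # arbitrary time constants
kappa_n = 2.0                    # arbitrary positive eigenvalue of W (case kappa_n > 0)
tau_AB = 1.0 / (1.0 / tau_A + 1.0 / tau_B)
tau = 1.0 / (1.0 / tau_A - 1.0 / tau_B)

def eigvals(mu):
    """lambda_n^{+/-} = -1/(2 tau_AB) +/- (1/(2 tau)) sqrt(1 - 4 mu kappa_n)."""
    d = cmath.sqrt(1.0 - 4.0 * mu * kappa_n)   # complex sqrt handles mu > mu_{n,c}
    base = -1.0 / (2.0 * tau_AB)
    return base + d / (2.0 * tau), base - d / (2.0 * tau)

mu_nc = 1.0 / (4.0 * kappa_n)    # mu_{n,c} = 1/(4 kappa_n)

lp, lm = eigvals(0.5 * mu_nc)    # mu <= mu_{n,c}: real eigenvalues, both negative
cp, cm = eigvals(3.0 * mu_nc)    # mu  > mu_{n,c}: complex pair, real part -1/(2 tau_AB)
print(lp, lm)
print(cp, cm)
```

In both regimes all real parts stay negative, matching the stability claim of this case.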

Stability of eigenmodes when \({\mathcal {W}}\) is asymmetric

If \({\mathcal {W}}\) is asymmetric, eigenvalues \(\kappa _{n}\) are complex, \(\kappa _{n}=\kappa _{n,r} + i \kappa _{n,i}\). We write \(\lambda _{\beta }= \lambda _{\beta ,r} + i \lambda _{\beta ,i}\), \(\beta =1 , \ldots, 2N\), with

$$ \textstyle\begin{cases} \lambda _{\beta ,r} = -\frac{1}{2 \tau _{AB}} \pm \frac{1}{2 \tau } \frac{1}{\sqrt{2}} \sqrt{a_{n}+u_{n}} ; \\ \lambda _{\beta ,i} = \pm \frac{1}{2 \tau } \frac{1}{\sqrt{2}} \sqrt{u_{n}-a_{n}}, \end{cases} $$

where \(a_{n}=1- 4 \mu \kappa _{n,r}\) and \(u_{n}=\sqrt{ ( 1- 4 \mu \kappa _{n,r} )^{2} + 16 \mu ^{2} \kappa _{n,i}^{2}} =\sqrt{1-8 \mu \kappa _{n,r} + 16 \mu ^{2} \vert \kappa _{n} \vert ^{2}}\). Note that we recover the real case when \(\kappa _{n,i}=0\) by setting \(u_{n}=a_{n}\).

Instability occurs if

$$ a_{n}+u_{n} > 2 \frac{\tau ^{2}}{\tau _{AB}^{2}}, $$

a condition depending on \(\kappa _{n,r}\) and \(\kappa _{n,i}\).
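The closed form for \(\lambda _{\beta ,r}\), \(\lambda _{\beta ,i}\) can be cross-checked against a direct complex square root. The sketch below uses arbitrary values for μ, \(\kappa _{n}\) and the time constants (same assumed reductions for τ and \(\tau _{AB}\) as before) and also verifies the identity \(u_{n}=\sqrt{1-8 \mu \kappa _{n,r} + 16 \mu ^{2} \vert \kappa _{n} \vert ^{2}}\).

```python
import cmath
import math

# Assumed reductions: 1/tau_AB = 1/tau_A + 1/tau_B, 1/tau = 1/tau_A - 1/tau_B
tau_A, tau_B = 0.1, 0.3              # arbitrary time constants
mu = 0.4                             # arbitrary
kappa_n = complex(-1.5, 2.0)         # arbitrary complex eigenvalue of an asymmetric W

tau_AB = 1.0 / (1.0 / tau_A + 1.0 / tau_B)
tau = 1.0 / (1.0 / tau_A - 1.0 / tau_B)

a_n = 1.0 - 4.0 * mu * kappa_n.real
u_n = math.sqrt((1.0 - 4.0 * mu * kappa_n.real) ** 2 + 16.0 * mu ** 2 * kappa_n.imag ** 2)
u_n_alt = math.sqrt(1.0 - 8.0 * mu * kappa_n.real + 16.0 * mu ** 2 * abs(kappa_n) ** 2)

# Closed form, '+' branch:
#   lambda_r = -1/(2 tau_AB) + (1/(2 tau)) sqrt((a_n + u_n)/2)
#   lambda_i =                 (1/(2 tau)) sqrt((u_n - a_n)/2)
lam_r = -1.0 / (2.0 * tau_AB) + math.sqrt((a_n + u_n) / 2.0) / (2.0 * tau)
lam_i = math.sqrt((u_n - a_n) / 2.0) / (2.0 * tau)

# Direct evaluation of -1/(2 tau_AB) + (1/(2 tau)) sqrt(1 - 4 mu kappa_n)
lam = -1.0 / (2.0 * tau_AB) + cmath.sqrt(1.0 - 4.0 * mu * kappa_n) / (2.0 * tau)
print(lam_r, lam.real)               # real parts agree
print(lam_i, abs(lam.imag))          # imaginary parts agree up to the +/- sign
```

The agreement follows from the standard formula for the square root of a complex number, with \(u_{n} = \vert 1- 4 \mu \kappa _{n} \vert \).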

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Souihel, S., Cessac, B. On the potential role of lateral connectivity in retinal anticipation. J. Math. Neurosc. 11, 3 (2021).



Keywords

  • Retina
  • Motion anticipation
  • Lateral connectivity
  • 2D