# Derived Patterns in Binocular Rivalry Networks

Casey O. Diekman^{1}, Martin Golubitsky^{1}, and Yunjiao Wang^{2}

*The Journal of Mathematical Neuroscience* **3**:6

https://doi.org/10.1186/2190-8567-3-6

© C.O. Diekman et al.; licensee Springer 2013

**Received: **15 January 2013

**Accepted: **22 April 2013

**Published: **8 May 2013

## Abstract

Binocular rivalry is the alternation in visual perception that can occur when the two eyes are presented with different images. Wilson proposed a class of neuronal network models that generalize rivalry to multiple competing patterns. The networks are assumed to have learned several patterns, and rivalry is identified with time periodic states that have periods of dominance of different patterns. Here, we show that these networks can also support patterns that were not learned, which we call *derived*. This is important because there is evidence for perception of derived patterns in the binocular rivalry experiments of Kovács, Papathomas, Yang, and Fehér. We construct modified Wilson networks for these experiments and use symmetry breaking to make predictions regarding states that a subject might perceive. Specifically, we modify the networks to include lateral coupling, which is inspired by the known structure of the primary visual cortex. In the modified network models, the surprising outcomes observed in these experiments are expected.

### Keywords

Binocular rivalry · Interocular grouping · Coupled systems · Symmetry · Hopf bifurcation

## 1 Introduction

Wilson [1] argues that generalizations of binocular rivalry can provide insight into conscious brain processes and proposes a neural network model for higher level decision making. Here, we demonstrate that the Wilson network model is also useful for understanding the phenomenon of binocular rivalry itself by analyzing several rivalry experiments discussed in Kovács, Papathomas, Yang, and Fehér [2]. Mathematical analysis of these network structures (based on the theory of coupled cell systems and symmetry in Golubitsky and Stewart [3–5]) leads to predictions that are directly testable via standard psychophysics experiments.

Ambiguous figures give rise to *illusions* due to insufficient information and to *rivalry* due to inconsistent information. One of the standard examples of illusion is given by the *Necker cube* shown in Fig. 1(a). There are two percepts that are commonly *perceived* when viewing the Necker cube picture: one with the yellow face at the back and one with it on top. There is not enough information in the picture to fix the percept, and this ambiguity leads to the two percepts alternating randomly.

In rivalry, the two eyes of the subject are presented with two different images such as the ones in Fig. 1(b) [6]. Typically, the subject reports perceiving the two images alternating in periods of dominance. There are two main types of mathematical models for rivalry (Laing et al. [8]). In the first type, rivalry is treated as a time periodic state (perhaps with added noise), and in the second the oscillation is obtained by noise driven jumping between stable equilibria in a bistable system (Moreno-Bote et al. [9]).

Many rivalry models describe the dynamics of two units *a* and *b* corresponding to the two percepts with a system of differential equations of the form

$$\dot{X}_{a}=F({X}_{a},{X}_{b}),\qquad \dot{X}_{b}=F({X}_{b},{X}_{a}),\tag{1}$$

where the vector ${X}_{a}$ consists of the state variables of unit *a* and the vector ${X}_{b}$ consists of the state variables of unit *b*. The equations in (1) are those associated with the two-node network in Fig. 2. It is further assumed that one of the variables ${x}_{\ast}^{E}$ is an *activity* variable and that ${x}_{a}^{E}>{x}_{b}^{E}$ implies that percept *a* is dominant. Similarly, percept *b* is dominant if ${x}_{b}^{E}>{x}_{a}^{E}$. In these models, equilibria where ${X}_{a}\ne {X}_{b}$ are *winner-take-all* states that correspond to one percept being dominant.

States where ${X}_{a}={X}_{b}$ are called *fusion* states. Fusion states are typically interpreted as states where a subject perceives both of the images superimposed [18–20]. One might wonder why fusion states would be of interest in mathematical models, since it seems unlikely that the two values ${X}_{a}$ and ${X}_{b}$ would be equal. However, because of the symmetry in model equations such as (1), the subspace ${X}_{a}={X}_{b}$ is flow-invariant, and fusion equilibria are structurally stable.
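The flow-invariance of the fusion subspace is easy to check numerically. The sketch below integrates a minimal two-unit rate model of the symmetric form (1); the sigmoid gain and all parameter values are illustrative choices of ours, not taken from a specific model in the text. A trajectory started with ${X}_{a}={X}_{b}$ stays on the fusion subspace exactly, because both units undergo identical arithmetic.

```python
import numpy as np

def gain(z):
    # illustrative sigmoid gain; any choice preserves the symmetry of (1)
    return 1.0 / (1.0 + np.exp(-z))

def F(X, Y, beta=2.0, g=1.5, eps=0.1, I=1.0):
    """Right-hand side for one unit: X = (xE, xH), Y = the competing unit."""
    xE, xH = X
    yE, _ = Y
    dxE = -xE + gain(I - beta * yE - g * xH)  # activity variable
    dxH = eps * (xE - xH)                     # slow fatigue variable
    return np.array([dxE, dxH])

def step(Xa, Xb, dt=0.01):
    # Euler step of the symmetric system (1): Xa' = F(Xa, Xb), Xb' = F(Xb, Xa)
    return Xa + dt * F(Xa, Xb), Xb + dt * F(Xb, Xa)

# start on the fusion subspace Xa == Xb
Xa = np.array([0.3, 0.1])
Xb = np.array([0.3, 0.1])
for _ in range(5000):
    Xa, Xb = step(Xa, Xb)

# identical values on both units => the fusion subspace is flow-invariant
assert np.array_equal(Xa, Xb)
```

The same symmetry argument is what makes fusion equilibria structurally stable despite being non-generic states.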

The linearization at a fusion equilibrium leaves invariant the subspaces ${V}^{+}=\{({X}_{a},{X}_{a})\}$ and ${V}^{-}=\{({X}_{a},-{X}_{a})\}$, one of which contains the critical eigenvectors at Hopf bifurcation. Symmetry implies that generically the critical eigenvectors are either in one subspace or the other [5]. Symmetry-preserving Hopf bifurcations (with critical eigenvectors in ${V}^{+}$) lead to periodic solutions satisfying ${X}_{b}(t)={X}_{a}(t)$, that is, to oscillation of fusion states. These states are perhaps uninteresting from the point of view of rivalry. Symmetry-breaking Hopf bifurcations (with critical eigenvectors in ${V}^{-}$) lead to periodic solutions satisfying ${X}_{b}(t)={X}_{a}(t+\frac{T}{2})$, where *T* is the period. Such solutions lead to periodic alternation between percepts *a* and *b*; that is, to rivalrous solutions.

The perception of coherent images assembled from complementary parts presented to different eyes, known as *interocular grouping*, had been documented previously (Díaz-Caneja [22] and Alais et al. [23]), and has since been reproduced using a variety of rivalry stimuli (Papathomas et al. [24]). Of the four rivalry experiments described in [2], only the first can be understood by the simple two-node network in Fig. 2. We will show that the other three experiments can be modeled using a variant of Wilson networks for generalized rivalry. In their first experiment, subjects are presented the *monkey* and *text* images in Fig. 3(a), and they report rivalry between the two images. In their second experiment, subjects are presented the scrambled images combining parts of the monkey's face and parts of the written text (see Fig. 3(b)). The subjects report that, in addition to the expected rivalry between the original scrambled images, for part of the time they perceive alternations between unscrambled images of *monkey* only and *text* only, such as those in Fig. 3(a). We show that the surprising outcome of this experiment is not surprising when formulated as a simple Wilson network.

Kovács et al. also performed two *colored dot* rivalry experiments that are analogous to the conventional and scrambled *monkey*-*text* experiments. In the conventional *colored dot* experiment, the subjects were shown the single-color images in Fig. 4(a). Besides reporting rivalry between the two single-color figures, the subjects unexpectedly report images with dots of scrambled colors, such as those in Fig. 4(b). The corresponding result for the conventional *monkey*-*text* experiment seems highly unlikely.

In the scrambled *colored dot* experiment, [2] presented the subjects with the images in Fig. 4(b). Given the results of the scrambled *monkey*-*text* experiment, it is not surprising that subjects reported rivalry between these two scrambled color images and also rivalry between single color images such as shown in Fig. 4(a). However, the analogy of the scrambled *colored dot* experiment with the scrambled *monkey*-*text* experiment is not quite so straightforward, since, as we will see in Sect. 2, the proposed Wilson networks for the two experiments have different symmetries.

Tong, Meng, and Blake [25] give a simplified description of the *colored dot* experiments of [2] by using a square array of four dots rather than a rectangular array of 24 dots. Our analysis is based on the simplified $2\times 2$ versions of these experiments, but extends to the $6\times 4$ case.

The results of our analysis can be summarized as follows:

- (1) The simplest Wilson model for the conventional *monkey*-*text* experiment is the standard two-node rivalry model in Fig. 2 and leads only to rivalry between the whole *monkey* and the whole *text* images.
- (2) The simplest Wilson model for the scrambled *monkey*-*text* experiment leads naturally to rivalry solutions between the scrambled images and also between the reconstructed images.
- (3) The modified Wilson model for the scrambled *colored dot* experiment also leads naturally to rivalry solutions between both the scrambled and the reconstructed images.
- (4) The Wilson model for the conventional *colored dot* experiment leads naturally to rivalry between scrambled images as well as between the conventional images. This is in contrast to the conventional *monkey*-*text* experiment.

We will see that our analysis also leads to possible additional rivalry states in the *colored dot* experiments and these states may be thought of as predictions made by our approach.

The remainder of the paper is organized as follows. We describe Wilson networks in Sect. 2. Our discussion differs from [1] in two important ways. First, we observe that patterns exist in rivalrous solutions for the Wilson networks that are not learned patterns. We call these additional patterns *derived*; the derived patterns are the ones that correspond to the unexpected results in the Kovács et al. experiments. Second, we introduce an additional type of coupling, *lateral coupling*, based on models of hypercolumns in the primary visual cortex literature (Bressloff et al. [26]).

Deciding on the exact form of a Wilson network model for a given experiment is not at this stage algorithmic. Moreover, there are many choices for the exact form of the network equations once the network is fixed. If we take the strict form of the Wilson models (where all nodes, all excitatory couplings, and all inhibitory couplings are identical) and we assume that the associated differential equations are highly idealized rate models (as Wilson does), then the derived patterns in the *monkey*-*text* experiment are always unstable. However, stability is a model-dependent property of solutions and simple changes to the network or to the model equations can lead to stable derived patterns.

There are many ways to modify Wilson networks to address the stability issue and we have chosen one here, namely, we have added lateral coupling to the network. Lateral coupling will also enable us to distinguish the Wilson network models for the two *colored dot* experiments by a change in network symmetry. The most important message in this paper is the observation that Wilson networks have derived patterns that can be classified using methods from the theory of symmetry-breaking Hopf bifurcations and that these derived patterns appear to correspond to the surprising perceived states found in psychophysics experiments. More discussion is needed to arrive at an algorithmic description of which (modified) Wilson network to use when modeling a given experiment.

Section 3 gives a brief description of equivariant Hopf bifurcation (see Golubitsky et al. [5]) and shows how to find periodic solutions in Wilson networks modeling the four rivalry experiments that correspond to the rivalries reported in these experiments. Our Hopf bifurcation analysis of the *colored dot* experiments is based on the four-dot version in [25] and on Hopf bifurcation in the presence of ${\mathbf{S}}_{4}$ symmetry (analyzed in Stewart [27]) and of ${\mathbf{D}}_{4}$ symmetry (as in Golubitsky et al. [5]). Note that ${\mathbf{S}}_{4}$ is the group of permutations on four letters and ${\mathbf{D}}_{4}$ is the symmetry group of a square.

Section 4 summarizes the calculations needed to compute stability for rivalrous solutions between both learned and derived patterns in the scrambled *monkey*-*text* networks. In this section, we use standard rate models to compute stability and to illustrate the effect of having lateral coupling.

We end this Introduction by emphasizing that our approach is mainly a *model independent* one advocated in Golubitsky and Stewart [4]. We use network structure and symmetry to create a menu of possible rivalrous solutions, rather than explicitly finding these solutions in a given differential equations model, such as is typically done in the literature [1, 10, 11, 28]. This menu is model independent. Stability, on the other hand, is *model dependent*. Our discussion of stability in Sect. 4 does rely on the choice of specific model equations; here we use the rate models introduced by others.

## 2 Networks

Wilson networks [1] are assumed to have learned several patterns, and rivalry is identified with time-periodic states that have periods of dominance of different patterns. Here, we show that these networks can also support derived patterns in addition to learned patterns.

In a Wilson network, each column of nodes corresponds to an attribute of the images and each node in a column corresponds to a possible level of that attribute. A *pattern* is a choice of a single level in each column. If the network has *learned* a particular pattern, then there are reciprocal excitatory connections between all nodes in the pattern. See Fig. 5(b). A Wilson network can learn many patterns. When it does, there are reciprocal excitatory connections between nodes in each pattern. In our discussion of rivalry, we assume that the images shown to each eye are the two learned patterns.

Before discussing networks for the rivalry experiments in [2], we consider a variant of Wilson networks that introduces a third type of coupling. This coupling is inspired by the hypercolumn structure of the primary visual cortex (V1). Neurons in V1 are known to be sensitive to orientations of line segments located in small regions of the visual field. Moreover, V1 consists of hypercolumns, which are small regions of V1 that correspond to specific regions of the visual field. Optical imaging of macaque V1 suggests that in each hypercolumn there are neurons that are sensitive to each orientation, and that neurons within a hypercolumn are all-to-all coupled (Blasdel [29]). This coupling is usually assumed to be inhibitory. Thus, when considering V1, the columns in the Wilson networks correspond to hypercolumns where the attributes are the direction of a line field at a specified area in the visual field. However, V1 imaging also indicates a second kind of coupling, called *lateral coupling* that connects neurons in neighboring hypercolumns [26, 29]. Moreover, the neurons that are most strongly laterally coupled are those that have the same orientation sensitivity [26, 30], albeit at different points in the visual field. Finally, lateral coupling is usually taken to be excitatory.

With the structure of V1 as inspiration, we define an excitatory lateral coupling in the Wilson networks by connecting those nodes in different columns that correspond to the *same level*. See Fig. 5(c).

The scrambled *monkey*-*text* experiment can be modeled by a two-level, two-attribute Wilson network with two learned patterns. To specify the network, we conceptualize the Kovács images in Fig. 3(b) as rectangles divided into two regions: one indicated by white and the other by blue in Fig. 6(a). The first attribute in the network corresponds to the portion of a rectangular image in the white region and the second attribute corresponds to the portion of that rectangular image in the blue region. In the Kovács experiment, the possible levels of each attribute are the portion of the *monkey* image in the associated region and the portion of the *text* image in that region.

This network has four nodes, where ${X}_{ij}$ represents level *i* of attribute *j* as shown in Fig. 6(b). More specifically, ${X}_{11}$ represents *monkey* in the white region in Fig. 6(a) and ${X}_{21}$ represents *text* in the white region. Similarly, ${X}_{12}$ represents *monkey* in the blue region and ${X}_{22}$ represents *text* in the blue region. Thus, there are reciprocal inhibitory connections between nodes ${X}_{11}$ and ${X}_{21}$ and between ${X}_{12}$ and ${X}_{22}$. There are also reciprocal excitatory connections between ${X}_{11}$ and ${X}_{22}$ and between ${X}_{21}$ and ${X}_{12}$ representing the two learned patterns. In this network, the state {${x}_{11}^{E}>{x}_{21}^{E}$ and ${x}_{22}^{E}>{x}_{12}^{E}$} corresponds to the scrambled image in Fig. 3(b)(left), whereas the state {${x}_{21}^{E}>{x}_{11}^{E}$ and ${x}_{12}^{E}>{x}_{22}^{E}$} corresponds to the scrambled image in Fig. 3(b)(right). Importantly, the network also supports two derived pattern states: {${x}_{11}^{E}>{x}_{21}^{E}$ and ${x}_{12}^{E}>{x}_{22}^{E}$}, which corresponds to the *monkey* only image in Fig. 3(a)(left), and {${x}_{21}^{E}>{x}_{11}^{E}$ and ${x}_{22}^{E}>{x}_{12}^{E}$}, which corresponds to the *text* only image in Fig. 3(a)(right).
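The bookkeeping of learned versus derived dominance states can be written out explicitly. The short sketch below (the function and dictionary names are ours, introduced purely for illustration) maps the dominant level in each of the two attribute columns to the percept described above:

```python
# dominant level (1 or 2) in the white-region column and the blue-region column
PERCEPT = {
    (1, 2): "scrambled image, Fig. 3(b) left",   # learned pattern
    (2, 1): "scrambled image, Fig. 3(b) right",  # learned pattern
    (1, 1): "monkey only, Fig. 3(a) left",       # derived pattern
    (2, 2): "text only, Fig. 3(a) right",        # derived pattern
}

def percept(xE11, xE21, xE12, xE22):
    """Classify a network state by which level dominates each column."""
    col1 = 1 if xE11 > xE21 else 2
    col2 = 1 if xE12 > xE22 else 2
    return PERCEPT[(col1, col2)]

# a state with level 1 dominant in both columns is the derived 'monkey' percept
assert percept(0.9, 0.1, 0.8, 0.2).startswith("monkey")
```

The two derived entries exist in the state space whether or not the corresponding patterns were ever learned; the question addressed in Sect. 4 is when they can be dynamically stable.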

The symmetries of the networks in Fig. 6(b) and 6(c) are the same. Hence, for this experiment, the addition of lateral coupling does not change the expected types of periodic solutions that can be obtained through symmetry-breaking Hopf bifurcation. However, lateral coupling does change the symmetry of the Wilson network (and hence the expected types of solutions) corresponding to the scrambled *colored dot* experiment as shown in Fig. 9.

Tong, Meng, and Blake [25] suggest simplified versions of the *colored dot* experiments in [2], where each eye is presented with a square symmetric pattern of four dots. So, in the Tong version of the conventional *colored dot* experiment, one learned pattern has four red dots and the other has four green dots, as shown in Fig. 8(a). To our knowledge, this proposed rivalry experiment has not been performed.

We model this experiment by a Wilson network consisting of four attribute columns, where each attribute refers to the position of one of the dots (upper left, lower left, lower right, upper right) and has two levels (red and green). The eight-node Wilson network with two learned patterns is shown in Fig. 8(b). Adding lateral coupling to this network does not change the symmetry since the lateral coupling and the learned pattern coupling are coincident in this model.

The modified Wilson network (with lateral coupling) for the scrambled *colored dot* experiment is shown in Fig. 9(b). This network is presented so that the learned pattern couplings are in horizontal planes; that is, the red and green levels are inverted in the LL and UR attribute columns. Note that if lateral coupling were not included, then this network would be isomorphic to Fig. 8(b), and hence have the same symmetry group. It would follow that we would predict the same solution types for the two *colored dot* experiments, which is not what is observed.

## 3 Symmetry and Hopf Bifurcation

The equivariant Hopf bifurcation analysis proceeds in three steps:

- (1) Determine the symmetry group *Γ* of the network and how *Γ* acts on phase space.
- (2) Determine the irreducible representations of this action of *Γ*. (Recall that a *representation* *V* is an invariant subspace of the action of *Γ*; *V* is *irreducible* if the only invariant subspaces are the trivial subspace {0} and *V* itself.)
- (3) Classify the periodic solutions for each distinct irreducible representation by their spatiotemporal symmetries.

Step 1 is straightforward for the networks we consider. Step 2 is most easily carried out by computing the isotypic decomposition of the action of *Γ*. An *isotypic component* consists of the sum of all isomorphic irreducible representations. In general, step 3 is difficult, but it has been worked out in the literature for most standard group actions. Note that if a symmetry $\gamma \in \Gamma $ acts trivially on an isotypic component, then all bifurcating periodic solutions corresponding to this component will be invariant under the symmetry. This remark enables us to identify representations that only lead to oscillating fusion states, which are uninteresting from the rivalry point of view. Specifically, let *ρ* be the symmetry that transposes the two nodes in each column. A solution that is invariant under *ρ* will have its activity variables equal in each column and will, therefore, be a fusion state.

### 3.1 The Scrambled *Monkey*-*Text* Experiment Networks

The differential equations associated with the four-node network in Fig. 6(b) have the form

$$\begin{aligned}\dot{X}_{11}&=F({X}_{11},{X}_{21},{X}_{22}),\qquad & \dot{X}_{12}&=F({X}_{12},{X}_{22},{X}_{21}),\\ \dot{X}_{21}&=F({X}_{21},{X}_{11},{X}_{12}),\qquad & \dot{X}_{22}&=F({X}_{22},{X}_{12},{X}_{11}),\end{aligned}\tag{2}$$

where in $F(X,Y,Z)$, *X* is the internal state variable of the given node, *Y* is the node connected to *X* with inhibitory coupling, and *Z* is the node connected to *X* with excitatory learned pattern coupling. Note that for general networks ${X}_{ij}\in {\mathbf{R}}^{k}$. However, in the models we use, $k=2$.

The symmetry group of this network is $\Gamma ={\mathbf{Z}}_{2}(\rho )\times {\mathbf{Z}}_{2}(\kappa )$, where *ρ* is the symmetry that swaps rows and *κ* is the symmetry that swaps columns. Specifically,

$$\rho ({X}_{11},{X}_{21},{X}_{12},{X}_{22})=({X}_{21},{X}_{11},{X}_{22},{X}_{12})\quad \text{and}\quad \kappa ({X}_{11},{X}_{21},{X}_{12},{X}_{22})=({X}_{12},{X}_{22},{X}_{11},{X}_{21}).\tag{3}$$

An important consequence of symmetry is that at a symmetric equilibrium the Jacobian of a symmetric system of differential equations, such as (2), is block diagonalized by the isotypic decomposition of the symmetry group acting on phase space [5].
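This block diagonalization can be illustrated concretely for the four-node network. In the sketch below, the matrix entries and the names `B`, `W` are illustrative choices of ours: we build a linear map that is equivariant under the row swap *ρ* and the column swap *κ* (any linearization of an admissible system at a fully symmetric equilibrium has this form) and verify that the symmetry-adapted basis diagonalizes it.

```python
import numpy as np

# node order: (X11, X21, X12, X22), one scalar per node for simplicity
I4 = np.eye(4)
B = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
              [0, 0, 0, 1], [0, 0, 1, 0]], float)  # inhibitory pairs (within columns)
W = np.array([[0, 0, 0, 1], [0, 0, 1, 0],
              [0, 1, 0, 0], [1, 0, 0, 0]], float)  # learned-pattern pairs

a0, b_inh, w_learn = -1.0, -2.0, 1.5  # illustrative linearized coupling weights
J = a0 * I4 + b_inh * B + w_learn * W  # any matrix of this form is Gamma-equivariant

# permutation matrices for rho (swap rows) and kappa (swap columns)
P_rho = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                  [0, 0, 0, 1], [0, 0, 1, 0]], float)
P_kappa = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
                    [1, 0, 0, 0], [0, 1, 0, 0]], float)
assert np.allclose(P_rho @ J, J @ P_rho)      # equivariance under rho
assert np.allclose(P_kappa @ J, J @ P_kappa)  # equivariance under kappa

# orthonormal symmetry-adapted basis, one vector per isotypic component
V = np.array([[1, 1, 1, 1],     # rho = +1, kappa = +1 (fusion)
              [1, 1, -1, -1],   # rho = +1, kappa = -1 (fusion)
              [1, -1, 1, -1],   # rho = -1, kappa = +1
              [1, -1, -1, 1]],  # rho = -1, kappa = -1
             float).T / 2.0

M = V.T @ J @ V  # change of basis (V is orthogonal)
assert np.allclose(M - np.diag(np.diag(M)), 0.0)  # J is diagonal in this basis
```

With one-dimensional nodes the blocks are $1\times 1$; with $X\in {\mathbf{R}}^{2}$ the same basis produces the $2\times 2$ blocks whose traces and determinants are analyzed in Sect. 4.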

The isotypic decomposition of the action of *Γ* on ${\mathbf{R}}^{8}$ is ${\mathbf{R}}^{8}={V}^{++}\oplus {V}^{+-}\oplus {V}^{-+}\oplus {V}^{--}$, where

$$\begin{aligned}{V}^{++}&=\{(X,X,X,X)\}\quad (\text{fusion}),\qquad &{V}^{-+}&=\{(X,-X,X,-X)\}\quad (\text{derived}),\\ {V}^{+-}&=\{(X,X,-X,-X)\}\quad (\text{fusion}),\qquad &{V}^{--}&=\{(X,-X,-X,X)\}\quad (\text{learned}),\end{aligned}\tag{4}$$

where $X=({x}^{E},{x}^{H})\in {\mathbf{R}}^{2}$ and the superscripts record the signs by which *ρ* and *κ* act. Note that any point $({X}_{11},{X}_{21},{X}_{12},{X}_{22})\in {\mathbf{R}}^{8}$ that is fixed by *ρ* satisfies ${X}_{11}={X}_{21}$ and ${X}_{12}={X}_{22}$. Since the attribute levels of such states are equal, these states are fusion states and are so labeled in (4). It also follows from the theory of Hopf bifurcation with symmetry and from (4) that Eq. (2) has four possible types of Hopf bifurcation from a fusion state *X* where all ${X}_{ij}$ are equal. One type of bifurcation leads to rivalry between learned patterns, a second type leads to rivalry between derived patterns, and as noted the remaining two types (where *ρ* fixes all points in the isotypic component) lead to oscillations of fusion states.

### 3.2 The Conventional *Colored Dot* Network

The Wilson network in Fig. 8(b) has ${\mathbf{S}}_{4}\times {\mathbf{Z}}_{2}(\rho )$ symmetry, where ${\mathbf{S}}_{4}$ is the permutation group of the four attribute columns and ${\mathbf{Z}}_{2}(\rho )$ interchanges the upper and lower nodes in each column. The rivalry predictions from this network require using the theory of Hopf bifurcation in the presence of ${\mathbf{S}}_{4}$ symmetry (Stewart [27] and Dias and Rodrigues [31]).

The isotypic decomposition of $\Gamma ={\mathbf{S}}_{4}\times {\mathbf{Z}}_{2}(\rho )$ acting on ${\mathbf{R}}^{8}={\mathbf{R}}^{4}\oplus {\mathbf{R}}^{4}$ is

$${\mathbf{R}}^{8}={V}_{1}^{+}\oplus {V}_{3}^{+}\oplus {V}_{1}^{-}\oplus {V}_{3}^{-},\tag{5}$$

where ${V}_{1}^{\pm}=\{(a,a,a,a,\pm a,\pm a,\pm a,\pm a)\}$ and ${V}_{3}^{\pm}=\{({a}_{1},\dots ,{a}_{4},\pm {a}_{1},\dots ,\pm {a}_{4}):{a}_{1}+\cdots +{a}_{4}=0\}$.
The decomposition (5) is the analog for the conventional *colored dot* network of the decomposition (4) for the scrambled *monkey*-*text* network. Note that *ρ* acts trivially in the plus representations and as multiplication by −1 in the minus representations. All solutions bifurcating from a plus representation are invariant under *ρ*, and hence are fusion states, since invariance under *ρ* implies that the entries in each attribute column are equal.

In Table 1, ${a}_{\theta}(t)=a(t+\theta T)$ where $a(t)$ is *T*-periodic. Hopf bifurcation based on ${V}_{1}^{-}$ leads to solutions of the form of ${\Sigma}_{0}$ in Table 1, that is, to rivalry between the two learned patterns in Fig. 8(a).

Isotropy subgroups of periodic solutions from ${\mathbf{S}}_{4}\times {\mathbf{Z}}_{2}(\rho )$ symmetry. We use the notation ${e}_{\theta}(t)=e(t+\theta T)$ where $e(t)$ is *T*-periodic. Moreover, the frequency of *u* is three times the frequency of *a*, $v\approx -3a$, and the frequency of *c* is twice the frequency of *a*

| | Pattern of oscillation | |
|---|---|---|
| ${\Sigma}_{0}$ | $\left(\begin{array}{cccc}{a}_{0}& {a}_{0}& {a}_{0}& {a}_{0}\\ {a}_{1/2}& {a}_{1/2}& {a}_{1/2}& {a}_{1/2}\end{array}\right)$ | Figure 8(a) |
| ${\Sigma}_{1}$ | $\left(\begin{array}{cccc}{a}_{0}& {a}_{1/2}& {a}_{1/2}& {a}_{0}\\ {a}_{1/2}& {a}_{0}& {a}_{0}& {a}_{1/2}\end{array}\right)$ | |
| ${\Sigma}_{2}$ | $\left(\begin{array}{cccc}c& {a}_{0}& {a}_{1/2}& c\\ c& {a}_{1/2}& {a}_{0}& c\end{array}\right)$ | Fusion |
| ${\Sigma}_{3}$ | $\left(\begin{array}{cccc}{a}_{0}& {a}_{1/4}& {a}_{2/4}& {a}_{3/4}\\ {a}_{2/4}& {a}_{3/4}& {a}_{0}& {a}_{1/4}\end{array}\right)$ | Figure 11 |
| ${\Sigma}_{4}$ | $\left(\begin{array}{cccc}{a}_{0}& {a}_{2/6}& {a}_{4/6}& {u}_{0}\\ {a}_{3/6}& {a}_{5/6}& {a}_{1/6}& {u}_{1/2}\end{array}\right)$ | Complicated transitions |
| ${\Sigma}_{5}$ | $\left(\begin{array}{cccc}{a}_{0}& {a}_{0}& {a}_{0}& {v}_{0}\\ {a}_{1/2}& {a}_{1/2}& {a}_{1/2}& {v}_{1/2}\end{array}\right)$ | Figure 12 |

So far, we have treated the conventional *colored dot* experiment with a $2\times 2$ grid of dots. However, the bifurcations using a $6\times 4$ grid of dots, as in the original experiment [2], are completely analogous. Suppose there are *n* dots. Then there will be *n* attribute columns with a symmetry group $\Gamma ={\mathbf{S}}_{n}\times {\mathbf{Z}}_{2}(\rho )$. The isotypic decomposition is

$${\mathbf{R}}^{2n}={V}_{1}^{+}\oplus {V}_{n-1}^{+}\oplus {V}_{1}^{-}\oplus {V}_{n-1}^{-},\tag{6}$$

where the isotypic components of *Γ* acting on ${\mathbf{R}}^{2n}={\mathbf{R}}^{n}\oplus {\mathbf{R}}^{n}$ are

$${V}_{1}^{\pm}=\{(a,\dots ,a,\pm a,\dots ,\pm a)\}\quad \text{and}\quad {V}_{n-1}^{\pm}=\{({a}_{1},\dots ,{a}_{n},\pm {a}_{1},\dots ,\pm {a}_{n}):{a}_{1}+\cdots +{a}_{n}=0\}.\tag{7}$$

Hence, the bifurcation structure for *n* dots is analogous to that of 4 dots; there are two types of bifurcation to fusion states (${V}_{n-1}^{+}$, ${V}_{1}^{+}$), one to rivalry between the learned patterns (${V}_{1}^{-}$), and one to bifurcation to derived patterns (${V}_{n-1}^{-}$). The actual solution types depend on *n* and we will not attempt to interpret the bifurcation results of [27] in the *n* dot case as we have in the four dot case.
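The dimension count behind this splitting — within each copy of ${\mathbf{R}}^{n}$, the ${\mathbf{S}}_{n}$ action separates the diagonal (dimension 1) from the zero-sum subspace (dimension $n-1$) — can be checked numerically. A linear map on ${\mathbf{R}}^{n}$ commuting with all of ${\mathbf{S}}_{n}$ necessarily has the form $a\,\mathrm{Id}+b\,\mathbf{1}$, where $\mathbf{1}$ is the all-ones matrix; the values of $a$, $b$, and $n$ below are illustrative.

```python
import numpy as np

n = 6
a, b = -1.0, 0.5
# any S_n-equivariant linear map on R^n has the form a*I + b*ones
J = a * np.eye(n) + b * np.ones((n, n))

# the diagonal spans the trivial component (dimension 1)
ones = np.ones(n) / np.sqrt(n)
assert np.allclose(J @ ones, (a + b * n) * ones)

# any zero-sum vector lies in the (n-1)-dimensional standard representation
v = np.zeros(n)
v[0], v[1] = 1.0, -1.0
assert np.allclose(J @ v, a * v)

# eigenvalue multiplicities: one copy of a + b*n, n-1 copies of a
eigvals = np.linalg.eigvals(J)
assert np.isclose(max(eigvals.real), a + b * n)
assert sum(np.isclose(eigvals.real, a)) == n - 1
```

Stacking two such copies with the *ρ*-sign gives the four components ${V}_{1}^{\pm}$ and ${V}_{n-1}^{\pm}$ named in the text.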

### 3.3 The Scrambled *Colored Dot* Network

Next, we return to the Kovács scrambled *colored dot* experiment where the subjects are shown the scrambled colored images in Fig. 4(b). In this case, subjects report perceiving rivalry between the all red dot and all green dot images in Fig. 4(a) for nearly 50 % of the duration of the experiment. This result is difficult to explain with a standard Wilson network. The reason is that when lateral coupling is ignored, this experiment leads to a Wilson network with the same symmetry group as the conventional Kovács dot experiment. It follows that rivalry between the images in Fig. 4(a) is one of several types of possible solutions and it is not clear why this particular solution type should be observed for such a large percentage of the time.

In the admissible equations (8) for this network, the overbar indicates coupling terms whose order can be interchanged. The form of (8) emphasizes the fact that there are three different types of coupling: inhibitory, excitatory learned, and excitatory lateral.

The isotypic components of *Γ* acting on ${\mathbf{R}}^{16}={\mathbf{R}}^{8}\oplus {\mathbf{R}}^{8}$ are ${W}_{0}^{\pm}$, ${W}_{1}^{\pm}$, and ${W}_{2}^{\pm}$, where the superscript records whether *ρ* acts trivially (+) or by multiplication by −1 (−).

As in the previous examples, bifurcation with respect to ${W}_{j}^{+}$ leads to fusion states. Bifurcation with respect to ${W}_{0}^{-}$ leads to rivalry between the learned patterns, and bifurcation with respect to ${W}_{1}^{-}$ leads to rivalry between the single color dots, as desired. Finally, bifurcation with respect to ${W}_{2}^{-}$ leads to scrambled color patterns similar to (but not the same as) those obtained in the ${\mathbf{S}}_{4}\times {\mathbf{Z}}_{2}(\rho )$ case. From an abstract point of view, the Wilson network with lateral coupling is a much more satisfactory explanation for the existence of the single color rivalry when scrambled dots are presented than is the Wilson network without lateral coupling.

Finally, we note that the discussion in this section generalizes to the scrambled dot experiment with a $6\times 4$ grid of colored dots, as long as the number of green dots and the number of red dots in the scrambled learned patterns are equal, as in Fig. 4(b).

## 4 Stability in Scrambled *Monkey*-*Text* Networks

The classification of possible solution types, as given in Sect. 3, is model independent. We do not need to know the particular equations in order to complete the classification; we just need to know that the equations are *Γ*-equivariant. Given a system of equations, we can prove that solutions of the types that we have classified actually exist only by showing that a Hopf bifurcation that corresponds to the appropriate isotypic component actually occurs. See the equivariant Hopf theorem in [5]. We can also determine whether these solutions are stable, which is model dependent; we need to know the equations.

There are three steps in the calculation of stability. First, we need to determine that there is a fusion equilibrium. Second, we must show that the Hopf bifurcations themselves can be stable. That is, we must find Hopf bifurcation points where the critical eigenvectors of the Jacobian *J* at the fusion equilibrium correspond to the given isotypic component and all other eigenvalues of *J* have negative real part. Third, we need to calculate higher order terms in a center manifold reduction to check that the bifurcating solutions are actually stable. Alternatively, we can just simulate the equations for parameter values near a stable Hopf point and see whether we can detect stable solutions. Indeed, this was our approach for the scrambled *monkey-text* model in Sect. 2.

The principal conclusion is that derived pattern rivalry (between unscrambled images) can be stable in this model only if the strength of the lateral coupling is greater than the strength of the learned pattern coupling (see Proposition 3). Note that this cannot happen if lateral coupling is absent. We also show that learned pattern rivalry (between scrambled images) can only be stable when the strength of the learned pattern coupling is greater than the strength of the lateral coupling (see Proposition 2).

### 4.1 Equations for the Scrambled *Monkey*-*Text* Network

In (11), → indicates an excitatory learned pattern connection, ↦ indicates an excitatory lateral connection, and ⇒ indicates an inhibitory connection. Similar rate models are often used in the rivalry literature (Wilson et al. [32, 33]). The parameters are: reciprocal learned pattern excitation between nodes $w>0$, reciprocal lateral excitation $\delta \ge 0$, reciprocal inhibition between nodes $\beta >0$, the external signal strength ${I}_{ij}\ge 0$ to nodes, the strength of reduction of the activity variable by the fatigue variable $g>0$, and the ratio of time scales $\epsilon <1$ on which ${\ast}^{E}$ and ${\ast}^{H}$ evolve. Note that $\delta =0$ for the simulations in [1]. The gain function $\mathcal{G}$ is usually assumed to be nonnegative and nondecreasing, and is often a sigmoid.
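As an illustration of how the three coupling types enter such a rate model, here is a minimal Python sketch of four-node dynamics in the spirit of (11). The gain function, parameter values, and Euler integration scheme are illustrative choices of ours (we take $\delta >w$, the regime that Sect. 4.5 identifies as necessary for stable derived pattern rivalry), and the sketch only checks that trajectories remain bounded.

```python
import numpy as np

# partners in node order (X11, X21, X12, X22)
INHIB = [1, 0, 3, 2]  # same column, other level     (beta, inhibitory)
LEARN = [3, 2, 1, 0]  # learned-pattern partner      (w, excitatory)
LAT   = [2, 3, 0, 1]  # same level, other column     (delta, lateral)

def gain(z):
    # illustrative sigmoid; nonnegative and nondecreasing as assumed in the text
    return 1.0 / (1.0 + np.exp(-4.0 * (z - 0.5)))

def rhs(xE, xH, w=0.5, delta=0.7, beta=1.2, g=0.8, eps=0.1, I=1.0):
    """Rate equations with learned (w), lateral (delta), inhibitory (beta) coupling."""
    inp = I + w * xE[LEARN] + delta * xE[LAT] - beta * xE[INHIB] - g * xH
    dxE = -xE + gain(inp)      # fast activity variables
    dxH = eps * (xE - xH)      # slow fatigue variables
    return dxE, dxH

rng = np.random.default_rng(0)
xE = rng.uniform(0.0, 0.1, 4)  # small random perturbation off the fusion subspace
xH = np.zeros(4)
dt = 0.01
for _ in range(20000):
    dxE, dxH = rhs(xE, xH)
    xE = xE + dt * dxE
    xH = xH + dt * dxH

assert np.all(np.isfinite(xE)) and np.all(np.abs(xE) < 10)
```

Because the gain is bounded, activity stays in a bounded band; whether a given parameter set actually produces learned or derived rivalry oscillations must be checked against the conditions in Propositions 2 and 3.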

As we will see, there is an advantage of lateral coupling in the four-node model for the scrambled *monkey*-*text* experiment. The additional coupling allows the rivalrous solutions with respect to the derived patterns to be asymptotically stable at bifurcation; these solutions are not stable if lateral coupling is excluded.

### 4.2 Calculation of Fusion Equilibria

Diekman et al. (Lemma 3.1 in [34]) state that for every *ρ* there is an $I>0$ and an $x>0$ satisfying (13). Thus, we can assume there is a fusion state for any choice of *w*, *δ*, *β*, *g*, *ε*. We are particularly interested in the case when $\rho <0$.

**Lemma 1**

*Fix* $w,\delta ,\beta ,g,I,{G}_{0}^{\prime}>0$ *and fix* ${x}_{0}>0$ *so that* $({x}_{0}-I)\rho >0$. *Then there exists a gain function* $\mathcal{G}(z)$ *satisfying* (14) *and* ${\mathcal{G}}^{\prime}({z}_{0})={G}_{0}^{\prime}$.

It follows from Lemma 1 that ${x}_{0}$ is a fusion equilibrium and that we can choose ${\mathcal{G}}^{\prime}({x}_{0})>0$ arbitrarily.

*Proof of Lemma 1* The sigmoidal function

$$\mathcal{G}(z)=\frac{2a}{1+{e}^{-2b(z-{x}_{0})/a}},$$

for example, satisfies $\mathcal{G}({x}_{0})=a$ and ${\mathcal{G}}^{\prime}({x}_{0})=b$. Set $b={G}_{0}^{\prime}>0$ and *a* equal to the RHS of (14), which is also positive since *ρ* and ${x}_{0}-I$ have the same sign. □
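The construction in Lemma 1 is easy to check numerically. The sigmoid used below, $\mathcal{G}(z)=2a/(1+{e}^{-2b(z-{x}_{0})/a})$, is one convenient choice of ours with the prescribed value $a$ and slope $b$ at ${x}_{0}$; any sigmoid with these two properties serves the proof equally well.

```python
import math

def make_gain(a, b, x0):
    """Sigmoid with G(x0) = a and G'(x0) = b (our illustrative construction)."""
    def G(z):
        return 2.0 * a / (1.0 + math.exp(-2.0 * b * (z - x0) / a))
    return G

a, b, x0 = 0.7, 1.3, 0.4
G = make_gain(a, b, x0)

# check the prescribed value and slope (central-difference derivative)
h = 1e-6
slope = (G(x0 + h) - G(x0 - h)) / (2 * h)
assert abs(G(x0) - a) < 1e-12
assert abs(slope - b) < 1e-6
```

The prefactor $2a$ pins the value at the midpoint of the sigmoid, and the exponent rate $2b/a$ then fixes the midpoint slope.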

### 4.3 Calculation of Critical Eigenvalues

At a fusion equilibrium, the Jacobian block-diagonalizes into four $2\times 2$ blocks, one for each isotypic component; each block contributes one trace and one determinant. For Hopf bifurcation to exist, we need one trace to be zero and the corresponding determinant to be positive. For that Hopf bifurcation to be stable, we require all four determinants to be positive and the remaining three traces to be negative.
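These trace and determinant requirements are the standard eigenvalue criteria for $2\times 2$ blocks: zero trace with positive determinant gives a purely imaginary pair (Hopf-critical), while negative trace with positive determinant puts both eigenvalues in the left half-plane. A quick numerical confirmation with illustrative matrices:

```python
import numpy as np

# zero trace, positive determinant -> purely imaginary eigenvalues (Hopf-critical)
A = np.array([[2.0, -5.0],
              [1.0, -2.0]])      # tr = 0, det = 1
ev = np.linalg.eigvals(A)
assert np.allclose(ev.real, 0.0)
assert np.allclose(sorted(ev.imag), [-1.0, 1.0])

# negative trace, positive determinant -> both eigenvalues in the left half-plane
B = np.array([[-1.0, -2.0],
              [1.0, -1.0]])      # tr = -2, det = 3
assert np.all(np.linalg.eigvals(B).real < 0)
```

For a $2\times 2$ matrix the eigenvalues are $\frac{1}{2}(\operatorname{tr}\pm\sqrt{\operatorname{tr}^{2}-4\det})$, which is why traces and determinants alone settle both criticality and stability of each block.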

### 4.4 Stability of Learned Pattern Rivalry

**Proposition 2**

*To have stable Hopf bifurcation to learned pattern rivalry in* (11), *it is necessary that condition* (18) *hold. Sufficient conditions for stable Hopf bifurcation to learned pattern rivalry are given by* (18) *and* (19).

*Proof* For Hopf bifurcation to learned pattern rivalry to exist, we need $tr({J}^{--})=0$. Combining this with the positivity of the parameters and of *ε* in (17), we obtain the necessary conditions (20). The necessary conditions (18) follow directly from (20), and the second condition in (20) follows from the first and third.

To prove the sufficiency part of the proposition, we must verify that the four determinants are positive. Note that the second inequality is always satisfied and, assuming (18), the first and third follow from the fourth. Finally, the fourth inequality follows from (19). □

Note that Hopf bifurcation to stable learned patterns is possible when the lateral coupling is nonexistent; that is, $\delta =0$.

### 4.5 Stability of Derived Pattern Rivalry

**Proposition 3**

*To have stable Hopf bifurcation to derived pattern rivalry in* (11), *it is necessary that condition* (22) *hold. Sufficient conditions for stable Hopf bifurcation to derived pattern rivalry are given by* (22) *and* (23).

*Proof* For Hopf bifurcation to derived pattern rivalry, we need $tr({J}^{-+})=0$. Combining this with the positivity of the parameters and of *ε* in (17), we obtain the necessary conditions (24).

Note that the necessary conditions (22) follow directly from (24) and the second condition in (24) follows from the first and third.

To prove the sufficiency part of the proposition, we need to verify that the determinants are all positive. This follows from (16) if the four conditions (21) are satisfied. Note that the second inequality is always satisfied and, assuming (22), the first and fourth follow from the third. Finally, the third inequality follows from (23). □

Note that Hopf bifurcation to stable derived patterns is possible only when the strength of the lateral coupling is larger than the strength of the learned pattern coupling; that is, $\delta >w$.

## 5 Discussion

We have shown that the surprising results in three binocular rivalry experiments described by Kovács et al. [2] can be understood through the use of Wilson-type networks [1] and equivariant Hopf bifurcation theory [5], as interpreted in coupled cell systems [3].

We would like to put our results in a broader context. We showed in Diekman et al. [34] that rivalry between two patterns in Wilson networks collapses to the two-node network in Fig. 2 when the patterns have no attribute levels in common. This reduction uses the notion of a quotient network discussed in [3] and proceeds by identifying equivalent levels in different attribute columns. Let $\mathcal{S}$ denote the subspace obtained in this way [34]. This subspace is flow-invariant for the dynamics; moreover, if one uses the rate models (10) (without lateral coupling), then there are regions in parameter space where the dynamics are attracting to $\mathcal{S}$. We mention this for two reasons. First, bifurcation in directions transverse to $\mathcal{S}$ yields the derived patterns discussed in this paper. For such bifurcations to occur, $\mathcal{S}$ cannot be attracting and this occurs when lateral coupling is present. Second, one can think of the reduction to $\mathcal{S}$ (that is, reduction to the two-node network) as aggregating the information contained in several different attributes into one combined attribute. We believe this is a more general phenomenon with different levels of pattern complexity, as we now describe.
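The two-node reduction can be illustrated with a simulation. The sketch below is not the paper's system (10); it is a generic mutual-inhibition-plus-adaptation rate model of the kind commonly used for rivalry, with parameter values chosen by us purely for illustration. Its two nodes alternate in dominance, the hallmark of rivalry:

```python
import numpy as np

def simulate(T=200.0, dt=0.01):
    """Euler-integrate a generic two-node rivalry model: each node
    inhibits the other (strength b) and is slowed by its own slow
    adaptation variable (strength g)."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.1) / 0.05))  # steep sigmoidal gain
    I, b, g, tau, tau_a = 1.0, 1.2, 1.5, 1.0, 20.0          # illustrative values
    u = np.array([1.0, 0.0])   # node activities
    a = np.array([0.5, 0.0])   # adaptation variables
    dominance = []
    for _ in range(int(T / dt)):
        inp = I - b * u[::-1] - g * a          # net input to each node
        u = u + dt * (-u + f(inp)) / tau
        a = a + dt * (-a + u) / tau_a
        dominance.append(0 if u[0] > u[1] else 1)
    return np.array(dominance)

dom = simulate()
switches = int(np.count_nonzero(np.diff(dom)))  # number of dominance alternations
```

Because the adaptation timescale is much slower than the activity timescale, the dominant node slowly fatigues and releases the suppressed node, producing repeated alternations over the simulated interval.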

To construct a Wilson network for a given experiment, we must assume which attributes and which levels appropriately define a pattern. For example, in the simplified *colored dot* experiments, we assume that the attributes are the colors of the dots at four geometric locations. On the other hand, in the scrambled *monkey*-*text* experiment, we assume that the attributes are the kind of picture (monkey or text) in two regions of the image rectangle (the blue and the white regions in Fig. 6(a)). One can ask whether these attributes are the reasonable ones to describe patterns in these experiments.

For example, suppose that the attributes in the scrambled *monkey*-*text* experiment are taken to be the type of image in the six regions labeled A–F in Fig. 13(a). Then we are led to the 12-node network in Fig. 13(b) as a model for this experiment. Such a decomposition is closer in spirit to the geometric decomposition in the *colored dot* experiments. It is reasonable to ask whether there is a relationship between the networks in Figs. 6(c) and 13(b), and there is: the larger network in Fig. 13(b) has a quotient network on a flow-invariant subspace, and that quotient is the smaller network used to model the *monkey*-*text* experiment. We believe that there is a general relationship between refined patterns (the addition of extra attribute columns in Wilson networks) and the quotient networks from coupled cell theory [3].
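The quotient construction can be made concrete in the simplest setting. The sketch below is ours and uses a small hypothetical network, not the network of Fig. 13(b): given an adjacency matrix and a partition of the nodes into classes, it checks that the partition is balanced (each node receives the same number of inputs from each class as every other node in its class) and, if so, returns the adjacency matrix of the quotient network:

```python
import numpy as np

def quotient(A, classes):
    """Return the quotient adjacency matrix of A under a balanced partition.

    `classes[i]` is the class of node i.  The partition is balanced if the
    number of inputs node i receives from each class depends only on the
    class of i; otherwise a ValueError is raised.
    """
    classes = np.asarray(classes)
    k = classes.max() + 1
    # counts[i, c] = number of inputs node i receives from class c
    counts = np.stack([A[:, classes == c].sum(axis=1) for c in range(k)], axis=1)
    Q = np.zeros((k, k))
    for c in range(k):
        rows = counts[classes == c]
        if not (rows == rows[0]).all():
            raise ValueError("partition is not balanced")
        Q[c] = rows[0]
    return Q

# Hypothetical 4-node ring in which opposite nodes are identified (2 classes)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
Q = quotient(A, [0, 1, 0, 1])   # each class receives 2 inputs from the other
```

Restricting the dynamics to the polydiagonal subspace, where identified nodes carry equal states, yields the flow-invariant subspace on which the quotient network governs the dynamics.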

There are two prevalent views about what leads to alternations during binocular rivalry: *eye-based* theories postulate that the two eyes compete for dominance, while *stimulus-based* theories postulate that it is coherent perceptual representations that are in competition (Papathomas et al. [24]). Kovács et al. [2] interpreted their results on interocular grouping (IOG) as evidence against eye-based theories of rivalry.

Lee and Blake [21] reexamine IOG during rivalry and argue that, whereas IOG rules out models of rivalry in which one eye or the other is completely dominant at any given moment, IOG can be explained by simultaneous dominance of local eye-based regions distributed between the eyes. To demonstrate this, they performed a series of experiments using the Kovács *monkey-text* images and an *eye-swap* technique that exchanges the rival images immediately after one becomes dominant (Blake et al. [35]). In their analysis, Lee and Blake consider a decomposition of the *monkey-text* images into six regions that is very similar to the decomposition shown in Fig. 13(a). Our mathematical construction, based on Wilson networks and an abstract notion of quotient networks, is not meant to represent V1 or any specific brain area. However, our results support the conclusion of [21] that global IOG (derived patterns) can be achieved by simultaneous local eye dominance.

We end by noting that it should be possible to test our predictions of likely percepts by performing the simplified *colored dot* experiments. We also note that illusions are part of this network theory and they themselves can lead to interesting kinds of perceptual alternations. This topic, as well as symmetry-breaking steady-state bifurcations that lead to various types of winner-take-all states, will be discussed in future work.

## Declarations

### Acknowledgements

The authors thank Randolph Blake, Tyler McMillen, Jon Rubin, Ian Stewart, and Hugh Wilson for helpful discussions. YW thanks the Computational and Applied Mathematics Department of Rice University for its support. This research was supported in part by NSF Grant DMS-1008412 to MG and NSF Grant DMS-0931642 to the Mathematical Biosciences Institute.

## References

- Wilson HR: Requirements for conscious visual processing. In *Cortical Mechanisms of Vision*. Edited by: Jenkins M, Harris L. Cambridge University Press, Cambridge; 2009:399–417.
- Kovács I, Papathomas TV, Yang M, Fehér A: When the brain changes its mind: interocular grouping during binocular rivalry. *Proc Natl Acad Sci USA* 1996, 93:15508–15511. 10.1073/pnas.93.26.15508
- Golubitsky M, Stewart I: Nonlinear dynamics of networks: the groupoid formalism. *Bull Am Math Soc* 2006, 43:305–364. 10.1090/S0273-0979-06-01108-6
- Golubitsky M, Stewart I: *The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space*. Birkhäuser, Basel; 2002.
- Golubitsky M, Stewart I, Schaeffer DG: *Singularities and Groups in Bifurcation Theory: Volume II*. Applied Mathematical Sciences 69. Springer, New York; 1988.
- Blake R, Logothetis NK: Visual competition. *Nat Rev Neurosci* 2002, 3:1–11.
- Your amazing brain [http://www.youramazingbrain.org/supersenses/necker.htm]
- Laing CR, Frewen T, Kevrekidis IG: Reduced models for binocular rivalry. *J Comput Neurosci* 2010, 28:459–476. 10.1007/s10827-010-0227-6
- Moreno-Bote R, Rinzel J, Rubin N: Noise-induced alternations in an attractor network model of perceptual bistability. *J Neurophysiol* 2007, 98:1125–1139. 10.1152/jn.00116.2007
- Curtu R: Singular Hopf bifurcations and mixed-mode oscillations in a two-cell inhibitory neural network. *Physica D* 2010, 239:504–514. 10.1016/j.physd.2009.12.010
- Curtu R, Shpiro A, Rubin N, Rinzel J: Mechanisms for frequency control in neuronal competition models. *SIAM J Appl Dyn Syst* 2008, 7:609–649. 10.1137/070705842
- Kalarickal GJ, Marshall JA: Neural model of temporal and stochastic properties of binocular rivalry. *Neurocomputing* 2000, 32:843–853.
- Laing C, Chow C: A spiking neuron model for binocular rivalry. *J Comput Neurosci* 2002, 12:39–53. 10.1023/A:1014942129705
- Lehky SR: An astable multivibrator model of binocular rivalry. *Perception* 1988, 17:215–228. 10.1068/p170215
- Matsuoka K: The dynamic model of binocular rivalry. *Biol Cybern* 1984, 49:201–208. 10.1007/BF00334466
- Mueller TJ: A physiological model of binocular rivalry. *Vis Neurosci* 1990, 4:63–73. 10.1017/S0952523800002777
- Noest AJ, van Ee R, Nijs MM, van Wezel RJA: Percept-choice sequences driven by interrupted ambiguous stimuli: a low-level neural model. *J Vis* 2007, 7(8): Article ID 10.
- Seely J, Chow CC: The role of mutual inhibition in binocular rivalry. *J Neurophysiol* 2011, 106:2136–2150. 10.1152/jn.00228.2011
- Liu L, Tyler CW, Schor CM: Failure of rivalry at low contrast: evidence of a suprathreshold binocular summation process. *Vis Res* 1992, 32:1471–1479. 10.1016/0042-6989(92)90203-U
- Shpiro A, Curtu R, Rinzel J, Rubin N: Dynamical characteristics common to neuronal competition models. *J Neurophysiol* 2007, 97:462–473. 10.1152/jn.00604.2006
- Lee S, Blake R: A fresh look at interocular grouping during binocular rivalry. *Vis Res* 2004, 44:983–991. 10.1016/j.visres.2003.12.007
- Diaz-Caneja E: Sur l’alternance binoculaire. *Ann Ocul* 1928, October:721–731.
- Alais D, O’Shea RP, Mesana-Alais C, Wilson IG: On binocular alternation. *Perception* 2000, 29:1437–1445. 10.1068/p3017
- Papathomas TV, Kovács I, Conway T: Interocular grouping in binocular rivalry: basic attributes and combinations. In *Binocular Rivalry*. Edited by: Alais D, Blake R. MIT Press, Cambridge; 2005:155–168.
- Tong F, Meng M, Blake R: Neural bases of binocular rivalry. *Trends Cogn Sci* 2006, 10:502–511. 10.1016/j.tics.2006.09.003
- Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC: Geometric visual hallucinations, Euclidean symmetry, and the functional architecture of striate cortex. *Philos Trans R Soc Lond B Biol Sci* 2001, 356:299–330. 10.1098/rstb.2000.0769
- Stewart I: Symmetry methods in collisionless many-body problems. *J Nonlinear Sci* 1996, 6:543–563. 10.1007/BF02434056
- Wilson H: Minimal physiological conditions for binocular rivalry and rivalry memory. *Vis Res* 2007, 47:2741–2750. 10.1016/j.visres.2007.07.007
- Blasdel GG: Orientation selectivity, preference, and continuity in monkey striate cortex. *J Neurosci* 1992, 12:3139–3161.
- Golubitsky M, Shiau L-J, Torok A: Bifurcation on the visual cortex with weakly anisotropic lateral coupling. *SIAM J Appl Dyn Syst* 2003, 2:97–143. 10.1137/S1111111102409882
- Dias APS, Rodrigues A: Hopf bifurcation with ${\mathbf{S}}_{N}$-symmetry. *Nonlinearity* 2009, 22:627–666. 10.1088/0951-7715/22/3/007
- Wilson H: Computational evidence for a rivalry hierarchy in vision. *Proc Natl Acad Sci USA* 2003, 100:14499–14503. 10.1073/pnas.2333622100
- Wilson H, Blake R, Lee S: Dynamics of traveling waves in visual perception. *Nature* 2001, 412:907–910. 10.1038/35091066
- Diekman C, Golubitsky M, McMillen T, Wang Y: Reduction and dynamics of a generalized rivalry network with two learned patterns. *SIAM J Appl Dyn Syst* 2012, 11:1270–1309. 10.1137/110858392
- Blake R, Yu K, Lokey M, Norman H: What is suppressed during binocular rivalry? *Perception* 1980, 9:223–231. 10.1068/p090223

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.