Monochromaticity of Orientation Maps in V1 Implies Minimum Variance for Hypercolumn Size
The Journal of Mathematical Neuroscience (JMN) volume 5, Article number: 10 (2015)
Abstract
In the primary visual cortex of many mammals, the processing of sensory information involves recognizing stimuli orientations. The repartition of preferred orientations of neurons in some areas is remarkable: a repetitive, nonperiodic layout. This repetitive pattern is understood to be fundamental for basic nonlocal aspects of vision, like the perception of contours, but important questions remain about its development and function. We focus here on Gaussian Random Fields, which provide a good description of the initial stage of orientation map development and, in spite of shortcomings we will recall, a computable framework for discussing general principles underlying the geometry of mature maps. We discuss the relationship between the notion of column spacing and the structure of correlation spectra; we prove formulas for the mean value and variance of column spacing, and we use numerical analysis of exact analytic formulae to study the variance. Referring to studies by Wolf, Geisel, Kaschube, Schnabel, and coworkers, we also show that spectral thinness is not an essential ingredient for obtaining a pinwheel density of π, whereas it appears as a signature of Euclidean symmetry. The minimum variance property associated with thin spectra could be useful for information processing, provide optimal modularity for V1 hypercolumns, and be a first step toward a mathematical definition of hypercolumns. A measurement of this property in real maps is in principle possible, and comparison with the results in our paper could help establish the role of our minimum variance hypothesis in the development process.
Introduction
Neurons in the primary visual cortex (V1, V2) of mammals have stronger responses to stimuli that have a specific orientation [1–3]. In many species including primates and carnivores (but not rodents, even though some of them have rather elaborate vision [4, 5]), these orientation preferences are arranged in an ordered map along the cortical surface. Moving orthogonally to the cortical surface, one meets neurons with the same orientation preference; traveling along the cortical surface, however, reveals a striking arrangement in smooth, quasiperiodic maps, with singular points known as pinwheels where all orientations are present [6–8]; see Fig. 1. All these orientation maps look similar, even in distantly related species [5, 9]; the main difference between any two orientation preference maps (OPM) seems to be a matter of global scaling.
The common design has very precise and beautiful geometrical properties, and universal quantitative properties of these cortical maps have recently been uncovered: for instance, a density of singular points close to π has been observed [9, 12]; see below. However, the exact functional advantage of this geometrical arrangement in hypercolumns remains unclear [5, 11, 13–16]. What is more, the functional principles underlying the observed properties of orientation maps are still under debate; in particular, it is often thought that a pinwheel density of π has to do with monochromaticity (existence of a critical wavelength in the correlation spectrum) of the cortical map. The aim of this short paper is to clarify the role of the monochromaticity, or spectral thinness, condition, using the simplified mathematical framework of Gaussian Random Fields with symmetry properties. Our first few remarks (Sect. 2.1) are included for clarification purposes: we first give an intrinsic definition of the column spacing in these fields, then discuss the intervention of spectral thinness in theoretical and experimental results related to pinwheel densities. Then (Sect. 2.2) we introduce the “minimum variance” property in our title, to help discuss the quasiperiodicity in the map and to try to understand better the notion of cortical hypercolumn. In the concluding Discussion (Sect. 3), we also try to clarify the relevance of this property for the development of real maps and formulate a simple test for our hypothesis that it is indeed relevant.
Many models for the development of orientation maps have been put forward [5, 17–19]; they address such important issues as the role of self-organization, or of interactions between orientation and other parameters of the receptive profiles [14, 19–22]. In this short note, we focus on a mathematical, computable framework in which geometrical properties can be discussed with full proofs, and whose quantitative properties can now be compared with those of experimental maps. While we thus put the focus on the geometry of theoretical maps rather than on the most realistic developmental scenarios, we try to relate this geometry to organizing principles, viz. information maximization and perceptual invariance, which are relevant for discussing real maps. In a mathematical setting, these principles can be enforced through explicit randomness and invariance structures.
Wolf, Geisel, Kaschube and coworkers [9, 23–25] have described a wide class of probabilistic models for the development of orientation preference maps. In all these models (and in our discussion) the cortical surface is identified with the plane \(\mathbb {R}^{2}\), and the orientation preference of neurons at a point x is given by (half) the argument of a complex number \(\mathbf{z}(x)\); one adds the important requirement that the map \(x \mapsto\mathbf{z}(x)\) be continuous (this is realistic enough if the modulus \(\vert \mathbf{z}(x)\vert \) stands for something like the orientation selectivity of the neurons at x; see [7, 11, 26]). Pinwheel centers thus correspond to zeroes of \(\mathbf{z}\).
A starting point for describing orientation maps in these models, one which we will retain in this note, is the following general principle: we should treat z as a random field, so that at each point x the complex number \(\mathbf{z}(x)\) is a random variable.
Even without considering development, it is reasonable to introduce randomness to take inter-individual variability into account. But about the statistical properties of the zero set of general random fields, present-day mathematics can say very little [27]; only for very specific subclasses of random fields are precise mathematical theorems available. The most important of those is the class of Gaussian Random Fields [27–29]—a random field z is Gaussian when all the joint laws of \((\mathbf{z}(x_{1}), \ldots ,\mathbf {z}(x_{n})) \in \mathbb {C}^{n}\) are those of Gaussian random variables.
If the map z arises from an unknown initial state and if the development features a stochastic differential equation, taking into account activity-dependent fluctuations and noise, the Gaussian hypothesis is very natural for the early stages of visual map development (see [23, 25, 30]). In the most precise and recent development models by Wolf, Geisel, Kaschube, and others [9, 14, 31], it is, however, only the initial stage that turns out to be well represented by a Gaussian field: upon introducing long-range interactions in the integral kernel of the stochastic differential equation representing the refinement of cortical circuitry, the Gaussian character of the field must be assumed to break down when the nonlinearities become significant, and the stationary states of the dynamics which represent mature maps cannot be expected to be Gaussian states. We shall comment on this briefly in Sect. 2.1.3 and come back to it in the Discussion (Sect. 3).
In spite of this, we shall stick to the geometry of maps sampled from Gaussian Random Fields (GRFs) in our short paper. We have several reasons for doing so. A first remark is that a better understanding of maps sampled from them can be helpful in understanding the general principles underlying more realistic models, or helpful in suggesting some such principles. A second remark is that with the naked eye, it is difficult to see any difference between some maps sampled from GRFs and actual visual maps (see Fig. 2), and that there is a striking likeness between some theorems on GRFs and some properties measured in V1. A third is that precise mathematical results on GRFs can be used for testing how close this likeness is, and to make the relationship between GRFs and mature V1 maps clearer.
Wolf and Geisel add a requirement of Euclidean invariance on their stochastic differential equation, so that if the samples from a GRF are to be thought of as providing (early or mature) cortical maps, the field should be homogeneous (i.e. insensitive, as a random field, to a global translation \(x \mapsto x + a\)), isotropic (insensitive to a global rotation, \(x \mapsto \bigl({\scriptsize\begin{matrix} \cos(\alpha) & -\sin(\alpha) \cr \sin(\alpha) & \cos(\alpha)\end{matrix}} \bigr) x\)) and centered (insensitive to a global shift of all orientation preferences, changing the value \(\mathbf{z}(x)\) at each x to \(e^{i\theta}\mathbf{z}(x)\)). Here again, looking at mature maps, geometrical invariance is a natural requirement for perceptual function; so we shall assume that the GRF z is centered, homogeneous, and isotropic [27, 32]. Note that of course, this invariance requirement cannot be formulated in a nonprobabilistic setting (a deterministic map from \(\mathbb {R}^{2}\) to ℂ cannot be homogeneous without being constant).
It actually turns out that these two mathematical constraints (Gaussian field statistics and symmetry properties) are strong enough to generate realistic-looking maps, with global quasiperiodicity. Quite strikingly, it has been observed [30, 33] that one need only add a spectral thinness condition to obtain maps that seem to have the right qualitative (a hypercolumnar, quasiperiodic organization) and quantitative properties (a value of π for pinwheel density). These mathematical features stand out among theoretical models for orientation maps as producing a nice quasiperiodicity, with roughly repetitive “hypercolumns” of about the same size that have the same structure, as opposed to a strictly periodic crystal-like arrangement (see [14, 21], compare [34, 35]). The aim of this short note is to clarify the importance of this spectral thinness condition for getting a quasiperiodic “hypercolumnar” arrangement on the one hand, a pinwheel density of π on the other.
Before we give results about homogeneous and isotropic GRFs, let us mention that the quantitative properties of the common design which have been observed by Kaschube et al. [9] also include mean values for three kinds of nearest neighbour distance and for two parameters representing the variability of the pinwheel density as a function of subregion size; evaluating these mean values in the mathematical setting of random fields, even in the oversimplified case of invariant GRFs, is a difficult mathematical problem which is beyond the author’s strengths at present. So in this short note, we shall focus on the existence of a precise hypercolumn size and a welldefined pinwheel density in the common design, and refrain from examining the other important statistics.
Results
Two Remarks on Gaussian Random Fields with Euclidean Symmetry
Preliminaries on Correlation Spectra
Let us first formulate the spectral thinness condition more precisely: in an invariant GRF, the correlation \(C(x,y)\) between orientations at x and y depends only on \(\Vert x-y\Vert \). Let us turn to its Fourier transform, or rather to the Fourier components of the map \(\varGamma\,\colon \mathbb {R}^{2} \rightarrow \mathbb {C}\) such that \(C(x,y) = \varGamma(x-y)\). For an invariant Gaussian field, specifying Γ does determine the field; what is more, there is a unique measure P on \(\mathbb {R}^{+}\) such that

\(\varGamma(\tau) = \int_{R>0} \varGamma_{R}(\tau) \,dP(R),\)  (1)

where, for fixed \(R>0\), the map \(\varGamma_{R}\) is \(\tau\mapsto\int_{\mathbb {S}^{1}} e^{i{R}\vec{u} \cdot\tau}\, d\vec{u}\) (the integral being taken with respect to the normalized measure on \(\mathbb {S}^{1}\), so that \(\varGamma_{R}(0) = 1\)).
Now, correlations on real cortical maps can be measured and the spectrum of Γ can be inferred [30]; data obtained by optical imaging reveals that the spectral measure P is concentrated on an annulus ([30, p. 100], see also [33]): this means that there is a dominant wavelength \(\varLambda_{0}\), such that the measure P concentrates around \(R_{0} = \frac{2\pi}{\varLambda_{0}}\).
Correlation spectra of real V1 maps, first discussed in [33], have been measured precisely by Schnabel in tree shrews [30] (see [30, p. 104]; Fig. 5.6(d) there is reproduced in Fig. 2 below). The spectral measure P has a nicely peaked shape, and the very clear location of the peak is used to define the dominant wavelength Λ; see Fig. 2. From Schnabel’s data we evaluate the standard deviation in P to be about 0.2Λ (caution: here P is a real correlation spectrum, not the spectral density of a GRF).
Although this is far from being an infinitely thin spectrum, it is not absurd to look at the extreme situation where we impose the spectral thinness to be zero. Figure 3 shows a map sampled from a monochromatic invariant GRF, in which Γ is one of the maps \(\varGamma_{R}\) of the previous paragraph, in other words the inverse Fourier transform of the Dirac distribution \(\delta(R - R_{0})\) on a circle: monochromatic (or almost monochromatic) invariant GRFs yield quite realistic-looking maps, at least to the naked eye.
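A sample of this kind can be generated with the standard random-wave construction: a finite superposition of plane waves with a common wavenumber and independent Gaussian amplitudes, whose joint laws approach those of a monochromatic invariant GRF as the number of waves grows. The sketch below is ours (Python; all names and the normalization \(\mathbb{E}\vert \mathbf{z}(x)\vert ^{2} = 1\) are conventions for this illustration, not taken from the references):

```python
import cmath
import math
import random

def sample_monochromatic_grf(R0, n_waves, rng):
    """Approximate sample of a centered, homogeneous, isotropic,
    monochromatic complex Gaussian field on R^2: a superposition of
    n_waves plane waves with wavenumber R0, uniformly random directions
    and independent complex Gaussian amplitudes, normalized so that
    E|z(x)|^2 = 1."""
    dirs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    amps = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
            / math.sqrt(2.0 * n_waves) for _ in range(n_waves)]
    def z(x, y):
        return sum(a * cmath.exp(1j * R0 * (x * math.cos(t) + y * math.sin(t)))
                   for a, t in zip(amps, dirs))
    return z

rng = random.Random(0)
R0 = 2.0 * math.pi                      # dominant wavelength Lambda_0 = 1
field = sample_monochromatic_grf(R0, 50, rng)

# orientation preference at a point = half the argument of z
theta = 0.5 * cmath.phase(field(0.3, 0.7))

# crude check of the normalization: average |z|^2 over many realizations
vals = []
for _ in range(300):
    f = sample_monochromatic_grf(R0, 50, rng)
    vals.append(abs(f(0.0, 0.0)) ** 2)
mean_sq = sum(vals) / len(vals)         # close to 1 by the law of large numbers
```

Homogeneity and isotropy hold in distribution (the directions are uniform on the circle), and the central limit theorem makes the finite superposition approximately Gaussian.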
This thinness hypothesis certainly has to do with the existence of a precise scale in the map, that is, with the “hypercolumnar” organization. In all existing theoretical studies that we know of, spectral thinness is introduced a priori into the equations precisely in order to obtain a repetitive pattern in the model orientation maps. For instance, in the very successful long-range interaction model of Wolf et al. [9, 31], the linear part of the stochastic differential equation for map development features a Swift–Hohenberg operator in which a characteristic wavelength is imposed. The “typical spacing” between iso-orientation domains is then defined as that which corresponds to the mean wavenumber in the power spectrum:

\(\varLambda_{\mathrm{mean}} = \frac{2\pi}{\int_{R>0} R \,dP(R)}.\)
Mean Column Spacing in Invariant Gaussian Fields
It is reasonable, both intuitively and practically, to expect that \(\varLambda_{\mathrm{mean}}\) gives the mean local period between iso-orientation domains. For reasonable bell-shaped power spectra, \(\varLambda_{\mathrm{mean}}\) is in addition quite close to the location of the peak in the spectrum, which very obviously corresponds to the “dominant frequency” in the power spectrum and is quite straightforward to measure. But from a mathematical point of view, there is a paradox here.
For Gaussian fields, it is natural to try to clarify this and write down an intrinsic definition of the mean column spacing in terms of the probabilistic structure of the field. The paradox is that then the natural scale to use turns out to be different from \(\varLambda_{\mathrm{mean}}\), and the difference is appreciable in measured spectra. We are going to show presently that in an invariant Gaussian random field, the typical spacing turns out to be the wavelength \(\varLambda_{\mathrm{sq}}\) corresponding to the quadratic mean wavenumber:

\(\varLambda_{\mathrm{sq}} = \frac{2\pi}{\sqrt{\int_{R>0} R^{2} \,dP(R)}},\)

which coincides with \(\varLambda_{\mathrm{mean}}\) if and only if the field is monochromatic.
Using Schnabel’s data to evaluate the corresponding wavelengths in real maps, we find that the quotient \(\varLambda_{\mathrm{mean}}/\varLambda _{\mathrm{sq}}\) is about 1.1: the two wavelengths differ by roughly 10 to 15 %. So using one rather than the other does have an importance.
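For a discretized spectral measure, both wavelengths are immediate to compute. The sketch below (Python; the Gaussian-profile annulus is a hypothetical spectrum chosen for illustration, not Schnabel’s data) shows that a relative spectral width of 20 % already separates the two, and that they coincide for a one-point (monochromatic) spectrum, in line with Jensen’s inequality:

```python
import math

def spacings_from_spectrum(radii, weights):
    """Lambda_mean = 2*pi/<R> and Lambda_sq = 2*pi/sqrt(<R^2>) for a
    discretized spectral measure P given by wavenumbers `radii` with
    masses `weights` (normalized internally)."""
    tot = sum(weights)
    m1 = sum(r * w for r, w in zip(radii, weights)) / tot
    m2 = sum(r * r * w for r, w in zip(radii, weights)) / tot
    return 2.0 * math.pi / m1, 2.0 * math.pi / math.sqrt(m2)

# hypothetical Gaussian-profile annulus: peak at R0, relative width 20%
R0, sigma = 1.0, 0.2
radii = [R0 - 3.0 * sigma + 6.0 * sigma * k / 200 for k in range(201)]
weights = [math.exp(-0.5 * ((r - R0) / sigma) ** 2) for r in radii]
l_mean, l_sq = spacings_from_spectrum(radii, weights)

# monochromatic spectrum: the two definitions coincide
m_mean, m_sq = spacings_from_spectrum([R0], [1.0])
```

Since \(\langle R\rangle^{2} \leq \langle R^{2}\rangle\), one always finds `l_mean >= l_sq`, with equality exactly in the monochromatic case.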
To justify our claim that \(\varLambda_{\mathrm{sq}}\) is a good intrinsic way to define the column spacing in an invariant Gaussian field, let us consider a fixed value of orientation, say the vertical. Let us draw any line D on the plane and look for places on D where this orientation is represented, which means that the real part of z vanishes. Now if z is a Euclidean-invariant standard Gaussian field, \(\mathfrak{Re}(\mathbf{z})|_{D}\) is a translation-invariant Gaussian field on the real line D. From the celebrated formula of Kac and Rice we can then deduce the typical spacing between its zeroes, and this yields the following theorem.
Result 1
Pick any line segment J of length ℓ on the plane and any orientation \(\theta_{0} \in \mathbb {S}^{1}\). Write \(\mathcal {N}_{J,\theta_{0}}\) for the random variable recording the number of points on J where the Gaussian field z provides an orientation \(\theta_{0}\). Then

\(\mathbb{E}[ \mathcal{N}_{J,\theta_{0}} ] = \frac{\ell}{\varLambda_{\mathrm{sq}}}.\)
Indeed, let us write Φ for \(\mathfrak{Re}(\mathbf{z})|_{D}\), viewed as a stationary Gaussian field on the real line, G for its covariance function, and \(\mathcal{G}\) for the covariance function of \(\mathfrak{Re}(\mathbf{z})\) viewed as a homogeneous and isotropic random field on \(\mathbb {R}^{2}\). The arguments leading up to the statement of Result 1 and the Kac–Rice formula which is recalled in the Appendix prove that \(\mathbb{E}[ \mathcal{N}_{J,\theta_{0}} ] = \ell\cdot\frac{\sqrt {\lambda} }{\pi}\), where \(\lambda= \mathbb{E} [\varPhi'(0)^{2} ]\). But \(\mathbb{E} [\varPhi'(0)^{2} ] = \partial _{x_{1}} \partial_{x_{2}} \mathbb{E} [ \varPhi(x_{1}) \varPhi(x_{2}) ] _{x_{1} = x_{2} = 0}\), and this is \(\partial_{x_{1}} \partial_{x_{2}} G(x_{1}-x_{2}) _{x_{1}=x_{2}=0} = -G''(0)\). To complete the proof we need to calculate this.
Now, in view of the Euclidean invariance of \(\mathfrak {Re}(\mathbf{z})\), we know that \(G''(0)\) is half the value of \(\Delta \mathcal{G}\) at zero. To evaluate this quantity, we use the spectral decomposition of \(\mathcal{G}\): it reads \(\mathcal{G} = \int_{R>0} \mathcal{G}_{R} \,dP(R)\), where \(\mathcal{G}_{R}\) is the covariance function of a real-valued monochromatic invariant field on \(\mathbb {R}^{2}\), hence is equal to \(\frac{1}{2}\varGamma_{R}\) (recall that \(\varGamma_{R}\) was defined in Eq. (1), and is real-valued). Now, \(\varGamma_{R}\) satisfies the Helmholtz equation \(\Delta( \varGamma_{R}) = -R^{2} \varGamma _{R}\), and in addition \(\varGamma_{R}(0)\) is equal to 1, so \(\mathcal {G}_{R}(0)\) is equal to \(1/2\). We conclude that \(-G''(0)\) is equal to \( \frac{1}{4} \int_{R>0} R^{2}\,dP(R) = \frac{\pi ^{2}}{\varLambda_{\mathrm{sq}}^{2}}\). This completes the proof of Result 1.
Let us now comment on this result. It means that repetitions of \(\theta_{0}\) occur on average every \(\varLambda_{\mathrm{sq}}\). Of course this is very close to \(\varLambda_{\mathrm{mean}}\) when the support of the power spectrum is contained in a thin enough annulus (if the width of such an annulus is less than a fifth of its radius, \(\varLambda _{\mathrm{mean}}\) and \(\varLambda_{\mathrm{sq}}\) are within 3 % of each other). But in general, it is obvious from Jensen’s inequality that \(\varLambda_{\mathrm{mean}} \geq\varLambda_{\mathrm{sq}}\), with equality if and only if the field is monochromatic. In real maps, there is an appreciable difference between \(\varLambda_{\mathrm{mean}}\) and \(\varLambda_{\mathrm{sq}}\) as we saw.
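The Rice crossing rate that drives these computations can be checked by direct simulation. The sketch below (Python; random-wave approximation as above, our names) counts all zeroes of \(\mathfrak{Re}(\mathbf{z})\) along a segment; note that this zero set collects more than one orientation, so the constant it tests is Rice’s rate \(\frac{1}{\pi}\sqrt{-G''(0)/G(0)}\) for our normalization \(\mathbb{E}\vert \mathbf{z}\vert ^{2} = 1\) (where \(G(0) = 1/2\)), rather than the statement of Result 1 itself:

```python
import cmath
import math
import random

def sample_field(R0, n_waves, rng):
    # Plane-wave superposition approximating a monochromatic invariant
    # Gaussian field with E|z|^2 = 1 (so Re z has variance 1/2).
    dirs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    amps = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
            / math.sqrt(2.0 * n_waves) for _ in range(n_waves)]
    def z(x, y):
        return sum(a * cmath.exp(1j * R0 * (x * math.cos(t) + y * math.sin(t)))
                   for a, t in zip(amps, dirs))
    return z

R0, ell, step = 2.0 * math.pi, 20.0, 0.025
rng = random.Random(3)
counts = []
for _ in range(40):
    z = sample_field(R0, 50, rng)
    vals = [z(step * k, 0.0).real for k in range(int(ell / step) + 1)]
    # a sign change between consecutive samples marks one zero crossing
    counts.append(sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0))
mean_count = sum(counts) / len(counts)

# Rice's formula for Re(z) restricted to a line: crossing rate
# (1/pi)*sqrt(-G''(0)/G(0)) with -G''(0) = R0^2/4 and G(0) = 1/2 here
predicted = ell * R0 / (math.pi * math.sqrt(2.0))
```

The sampling step must be small compared with the wavelength \(2\pi/R_0\) so that no pair of crossings falls between consecutive samples.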
Pinwheel Densities in Gaussian Fields and Real Maps
Let us turn now to pinwheel densities; we would like to comment on a beautiful theoretical finding by Wolf and Geisel and related experimental findings by Kaschube, Schnabel and others. We feel we should be very clear here and insist that this subsection is a comment on work by Wolf, Geisel, Kaschube, Schnabel and others; if we include the upcoming discussion it is to clarify the role of the spectral thinness condition in the proof of their result, and we seize the opportunity to comment on this work’s theoretical significance.
If a wavelength Λ is fixed, the pinwheel density \(d_{\varLambda }\) in a (real or theoretical) map is the mean number of singularities in an area \(\varLambda^{2}\). In the experimental studies of Kaschube et al. [9] and Schnabel [30], the wavelength used is obtained with two algorithms, one which localizes the maximum in the power spectrum, and one which averages local periods obtained by wavelet analysis. These two algorithms give approximately the same result, say \(\varLambda_{\mathrm{exp}}\), and pinwheel densities are scaled relative to this \(\varLambda_{\mathrm{exp}}\): a very striking experimental result is obtained by Kaschube’s group, namely

\(d_{\varLambda_{\mathrm{exp}}} \simeq \pi.\)  (4)
On the other hand, in an invariant Gaussian random field, expectations for pinwheel densities may be calculated using generalizations of the formula of Kac and Rice. This calculation has been conducted by Wolf and Geisel [23, 25], Berry and Dennis [36]; recent progress on the mathematical formulation of the Kac–Rice formula makes it possible to write down new proofs [29, 37], as we shall see presently. The value of π occurs very encouragingly here, too:
Theorem
(Wolf and Geisel [25], Berry and Dennis [36]; see also [29, 37])
Let us write \(\mathcal {P}_{\mathcal{A}}\) for the random variable recording the number of zeroes of the Gaussian field z in a region \(\mathcal{A}\), and \(\vert \mathcal{A}\vert \) for the Euclidean area of \(\mathcal{A}\). Then

\(\mathbb{E}[ \mathcal{P}_{\mathcal{A}} ] = \pi \frac{\vert \mathcal{A}\vert }{\varLambda_{\mathrm{sq}}^{2}}.\)
We think it can be of interest for readers of this journal that we include a proof of this result here. We would like to say very clearly that the discovery of this result is due to Wolf and Geisel on the one hand, and independently to Berry and Dennis in the monochromatic case. In [29], Azaïs and Wschebor gave a mathematically complete statement of a Kac–Rice-type formula, and recently Azaïs, Wschebor and León used it (following Berry and Dennis) to give a mathematically complete proof of the above theorem, though they wrote down the details only in case z is monochromatic [37]. It is for the reader’s convenience and because the focus of this short note is on nonmonochromatic fields that we recall their arguments here.
Azaïs and Wschebor’s theorem (Theorem 6.2 in [29]), in the particular case of a smooth reduced Gaussian field, is the following equality:

\(\mathbb{E}[ \mathcal{P}_{\mathcal{A}} ] = \frac{1}{\pi} \int_{\mathcal{A}} \mathbb{E}\bigl[ \bigl\vert \operatorname{det} d\mathbf{z}(p) \bigr\vert \mid \mathbf{z}(p) = 0 \bigr] \,dp.\)
Here the integral is with respect to Lebesgue measure on \(\mathbb {R}^{2}\), and the integrand is a conditional expectation. To evaluate this, one should first note that z has constant variance, and an immediate consequence is that for each p, the random variable \(\mathbf{z}(p)\) is independent from the random variable recording the value of the derivative of the real part (resp. the imaginary part) of z at p. So the random variables \(\operatorname{det} d\mathbf {z}(p) \) and \(\mathbf{z}(p)\) are actually independent at each p, and we can remove the conditioning in the formula. Now at each p, \(d\mathbf{z}(p)\) is a \(2\times2\) matrix whose columns, \(C_{1}(p):= \bigl({\scriptsize\begin{matrix} (\partial_{x} \mathfrak{Re} (\mathbf{z}))(p) \cr (\partial_{y} \mathfrak{Re}(\mathbf{z}))(p) \end{matrix}}\bigr) \) and \(C_{2}(p):= \bigl({\scriptsize\begin{matrix} (\partial_{x} \mathfrak{Im}(\mathbf{z}))(p) \cr (\partial_{y} \mathfrak{Im}(\mathbf{z}))(p) \end{matrix}}\bigr) \), are independent Gaussian vectors (see [27, Sect. 1.4 and Chap. 5]). Because z has Euclidean symmetry, the components of \(C_{1}(p)\) and \(C_{2}(p)\) have zero mean and the same variance, say \(V_{p}\), as \((\partial_{x} \mathfrak{Re}(\mathbf{z}))(p)\). But \( \vert \operatorname{det} d\mathbf{z}(p)\vert \) is the area of the parallelogram generated by \(C_{1}(p)\) and \(C_{2}(p)\), and the “base times height” formula says this area is the product of \(\Vert C_{1}(p)\Vert \) with the norm of the projection of \(C_{2}(p)\) on the line orthogonal to \(C_{1}(p)\). The expectation of \(\Vert C_{1}(p)\Vert \), a “chi” random variable, is \(\sqrt{\pi V_{p}/2}\), and that of the norm of the projection of \(C_{2}(p)\) on any fixed line is \(\sqrt{2 V_{p}/\pi}\); since both columns are independent, we can conclude that

\(\mathbb{E}[ \mathcal{P}_{\mathcal{A}} ] = \frac{1}{\pi} \int_{\mathcal{A}} V_{p} \,dp = \frac{\vert \mathcal{A}\vert }{\pi} V_{0}\)
(the last equality is because z and all its derivatives are stationary fields). Now we need to evaluate \(V_{0} = \mathbb{E} \{ (\partial_{x} \mathfrak{Re} \mathbf{z})(0)^{2} \}\). But this quantity already appeared in the proof of Result 1, where it was labeled λ. So we already proved that it is equal to \(\frac{\pi^{2}}{\varLambda_{\mathrm{sq}}^{2}}\), and this concludes the proof of Wolf and Geisel’s theorem.
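The “base times height” step can also be checked by direct simulation: for a \(2\times2\) matrix with independent centered Gaussian columns of componentwise variance V, the expected column norm is \(\sqrt{\pi V/2}\) and the expected absolute determinant is V. A minimal Python sketch (V = 0.7 is an arbitrary test value of ours):

```python
import math
import random

# Monte Carlo check of the determinant computation used in the proof.
rng = random.Random(0)
V, n = 0.7, 200_000
s = math.sqrt(V)
sum_norm, sum_det = 0.0, 0.0
for _ in range(n):
    a, b = rng.gauss(0.0, s), rng.gauss(0.0, s)   # first column C1
    c, d = rng.gauss(0.0, s), rng.gauss(0.0, s)   # second column C2
    sum_norm += math.hypot(a, b)                  # ||C1||, a chi variable
    sum_det += abs(a * d - b * c)                 # |det|, area of parallelogram
mean_norm = sum_norm / n       # close to sqrt(pi*V/2)
mean_det = sum_det / n         # close to V
```

The product \(\sqrt{\pi V/2}\cdot\sqrt{2V/\pi} = V\) is exactly the value the proof requires for \(\mathbb{E}\vert \operatorname{det} d\mathbf{z}(p)\vert \).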
From this theorem Wolf, Geisel, and others deduce that \(d_{\varLambda_{\mathrm{mean}}} \geq\pi\), and it is in this form that the theorem is discussed. However, we have seen that \(d_{\varLambda _{\mathrm{sq}}}\), which is equal to π whatever the spectrum, is a rather more natural theoretical counterpart to \(d_{\varLambda_{\mathrm{exp}}}\). If we shift the focus from \(\varLambda_{\mathrm{mean}}\) to \(\varLambda_{\mathrm{sq}}\), we obtain from Result 1 the following reformulation of Wolf and Geisel’s theorem.
Result 2
Write Δ for the typical distance between iso-orientation domains, as expressed by Result 1, and η for the value \(\frac{\mathbb{E}[ \mathcal{P}_{\mathcal{A}} ]}{\vert \mathcal{A}\vert }\) of the pinwheel density. Then

\(\eta \cdot \Delta^{2} = \pi.\)
There are two simple consequences of Wolf and Geisel’s finding which we would like to bring to our reader’s attention.
The first is that the pinwheel density of π observed in experiments is scaled with respect to \(\varLambda_{\mathrm{exp}}\), and not with respect to \(\varLambda_{\mathrm{sq}}\). Using Schnabel’s data, we can evaluate the \(d_{\varLambda_{\mathrm{sq}}}\) of real maps, and as \(\varLambda_{\mathrm{sq}}\) is about \(0.82\varLambda_{\mathrm{exp}}\) in Schnabel’s data, \(d_{\varLambda_{\mathrm{sq}}}\) strongly departs from π in real maps. Since it would be exactly π in maps sampled from GRFs, one consequence of the work in [9, 25, 30] is the following.
Corollary
The pinwheel density of observed mature maps is actually incompatible with that of maps sampled from invariant Gaussian Fields.
This fact is quite apparent in the work by Wolf, Geisel, Kaschube, and coworkers, but since we focused on GRFs in this short note we felt it was useful to recall this as clearly as possible.
Our second remark is that in the reformulation stated as Result 2 here, there is no longer any spectral thinness condition. In other words, when we consider maps sampled from Gaussian Random Fields, a pinwheel density of π is a numerical signature of the fact that the field has Euclidean symmetry. Result 2 thus shows that when one considers invariant GRFs, average pinwheel density and monochromaticity are independent features.
Because invariant GRFs have ergodicity properties, an ensemble average such as that in Result 2 can be evaluated on an individual sample map; one can thus consider a single output of the GRF z and proceed to quantitative measurements on it to determine whether the probability distribution of z has Euclidean symmetry. This is quite remarkable, since no single output can have Euclidean symmetry!
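For instance, the density-π prediction can be probed on a single sampled map by locating pinwheels as phase windings on a grid. The sketch below (Python; random-wave approximation as before, all names ours) counts the zeroes of a sampled field in a square of side \(5\varLambda_{\mathrm{sq}}\) and compares with π zeroes per area \(\varLambda_{\mathrm{sq}}^{2}\):

```python
import cmath
import math
import random

def sample_field(R0, n_waves, rng):
    # Plane-wave superposition approximating a monochromatic invariant
    # Gaussian field with E|z|^2 = 1 and spacing Lambda_sq = 2*pi/R0.
    dirs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_waves)]
    amps = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
            / math.sqrt(2.0 * n_waves) for _ in range(n_waves)]
    def z(x, y):
        return sum(a * cmath.exp(1j * R0 * (x * math.cos(t) + y * math.sin(t)))
                   for a, t in zip(amps, dirs))
    return z

def count_pinwheels(z, L, n):
    """Count zeroes of z in [0, L]^2 via the phase winding around each
    cell of an n-by-n grid; a winding of +-2*pi flags a pinwheel."""
    h = L / n
    ph = [[cmath.phase(z(i * h, j * h)) for j in range(n + 1)]
          for i in range(n + 1)]
    def wrap(d):  # reduce a phase difference to the interval (-pi, pi]
        return d - 2.0 * math.pi * math.floor(d / (2.0 * math.pi) + 0.5)
    total = 0
    for i in range(n):
        for j in range(n):
            w = (wrap(ph[i + 1][j] - ph[i][j])
                 + wrap(ph[i + 1][j + 1] - ph[i + 1][j])
                 + wrap(ph[i][j + 1] - ph[i + 1][j + 1])
                 + wrap(ph[i][j] - ph[i][j + 1]))
            if abs(w) > math.pi:
                total += 1
    return total

rng = random.Random(7)
R0, L = 2.0 * math.pi, 5.0           # Lambda_sq = 1, a 5x5 patch
counts = [count_pinwheels(sample_field(R0, 60, rng), L, 100) for _ in range(3)]
mean_count = sum(counts) / len(counts)
expected = math.pi * L * L           # the density-pi prediction
```

The grid step must be small compared with \(\varLambda_{\mathrm{sq}}\) for the winding computation to be reliable.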
To conclude this subsection, let us recall that Results 1 and 2 say nothing of map ensembles that do not have Gaussian statistics, and in particular nothing of the geometry of real maps; they certainly do not mean that the definition of \(\varLambda_{\mathrm{exp}}\) used in experiments is faulty, but were simply aimed at disentangling monochromaticity from other geometrical principles in the simplified setting of GRFs. To illustrate the fact that our results are not incompatible with the definition of \(\varLambda_{\mathrm{exp}}\) used in experiments, let us note that of the two methods used by Kaschube et al. to determine \(\varLambda_{\mathrm{exp}}\), one (the averaging of local wavelet-evaluated spacings) provides a definition of column spacing similar to that which we used in Result 1, and the other (looking for the peak in the power spectrum) gives an appreciably different result from \(\varLambda_{\mathrm{sq}}\) as we recalled. The fact that Kaschube et al. observe the two algorithms to give very close results in real maps does not go against Result 1, but rather can be seen as another argument, this time Result 1-based, against GRFs representing mature maps. The measurement of the pinwheel density, Eq. (4), furthermore indicates that development seems to keep Result 2 true at the mature stage. We shall come back to this in the Discussion.
The Variance of Column Spacings
Results 1–2 show that for Gaussian Random Fields, the existence of a pinwheel density of π is independent of the monochromaticity condition. We evaluated the expected value of the column spacing in an invariant GRF in Result 1, and we now turn to its variance. There are several reasons why it should be interesting to establish rigorously that spectral thinness provides a low variance.
A first one is the search for a mathematically welldefined counterpart to the statement, visually obvious, that orientation maps are “quasiperiodic”. Most mathematical definitions of quasiperiodicity (like those which follow Harald Bohr [38]) are not very well suited to discussing V1 maps, and we feel that the meaning of the word is, in the case of V1 maps, well conveyed by the property we will demonstrate. While it is intuitively obvious that a “nice quasiperiodicity” should come with spectral thinness, as we shall see it is mathematically nontrivial.
A second reason to look at the variance is to try to understand better the concept of “cortical hypercolumn”, due to Hubel and Wiesel, which is crucial to discussions of the functional architecture of V1. Neurons in V1 are sensitive to a variety of local features of the visual scene, and a hypercolumn gathers neurons whose receptive profiles span the possible local features (note that there is no well-defined division of V1 into hypercolumns, but an infinity of possible partitionings). In studies related to the local geometry of V1 maps, once a definition for the column spacing Λ has been chosen, one is led (as in [9, 24, 39]) to define the area of a hypercolumn as \(\varLambda^{2}\). Here we put the focus on the orientation map only; but even then it is legitimate to wonder whether in a domain of area \(\varLambda^{2}\), each orientation is represented at least once. Note that a value of π for the pinwheel density can guarantee this if one establishes that the density also has a small variance; here, however, we are not going to evaluate this variance, which is possible in principle [37] but not easy, and we simply focus on column spacing. This is a first step in trying to check that the internal structure of domains with area \(\varLambda^{2}\) is somewhat constant, as suggested by the available results on pinwheel density.
Let us add that from the point of view of information processing, it is not unnatural to expect a low variance for hypercolumn size. It is known that the behavior of many neurons in the central nervous system depends on the statistical properties in the distributions of spikes reaching them, and not merely on the average activity. These statistical characteristics depend on physiology of course, but also on the information being conveyed. Now, vision is an active process; the eye moves ceaselessly and a given object or contour is processed by many regions of V1 in a relatively short time. For a neuron receiving inputs from V1, a low variance for hypercolumn size should help make the distribution of received information more uniform (with minimum bias for a given orientation). This would be in harmony with a general principle at work in the central nervous system, that of maximizing mutual information, which on the sensory side corresponds to a maximum of discrimination (and Fisher information; see [40]) and on the motor side to what has been called the “minimum variance principle”, for instance in the study of ocular saccades or arm movements [41].
So we will now consider the variance \(\mathbb{V} [ \mathcal{N}_{J,\theta_{0}}]\) of the previous random variable. We will show that it reaches a minimum when the spectrum is a pure circle. Now, evaluating this variance is surprisingly difficult, even though there is an explicit formula, namely the following.
Theorem
(Cramér and Leadbetter; see [42])
In the setting of Result 1, write \(G\,\colon \mathbb {R}\rightarrow \mathbb {R}\) for the covariance function of \(\varPhi = \mathfrak{Re}(\mathbf{z})|_{D}\), and \(M_{33}(\tau)\), \(M_{34}(\tau)\) for the cofactors of the \((3,3)\) and \((3,4)\) entries in the covariance matrix of the Gaussian vector \((\varPhi(0), \varPhi(\tau), \varPhi'(0), \varPhi'(\tau))\), viz.

\(\begin{pmatrix} G(0) & G(\tau) & 0 & G'(\tau) \\ G(\tau) & G(0) & -G'(\tau) & 0 \\ 0 & -G'(\tau) & -G''(0) & -G''(\tau) \\ G'(\tau) & 0 & -G''(\tau) & -G''(0) \end{pmatrix}.\)

Then, writing \(\varDelta(\tau)\) for \(G(0)^{2} - G(\tau)^{2}\),

\(\mathbb{V}[ \mathcal{N}_{J,\theta_{0}} ] = \mathbb{E}[ \mathcal{N}_{J,\theta_{0}} ] - \mathbb{E}[ \mathcal{N}_{J,\theta_{0}} ]^{2} + 2 \int_{0}^{\ell} (\ell - \tau) \frac{\sqrt{M_{33}(\tau)^{2} - M_{34}(\tau)^{2}} + M_{34}(\tau) \arcsin ( M_{34}(\tau) / M_{33}(\tau) )}{\pi^{2} \varDelta(\tau)^{3/2}} \,d\tau.\)  (6)
Recall here that
this \(G(\tau)\) is an oscillatory integral which involves Bessel-like functions with different parameters, and the formula for \(\mathbb{V}[ \mathcal{N}_{J,\theta_{0}} ]\) features quite complicated expressions using the first and second derivatives of this integral, with a global integration on top of this; so any analytical understanding of formula (6) seems out of reach! But we can check numerically that it does attest to monochromatic fields having minimum variance.
We used Mathematica to evaluate variances of invariant GRFs, using the formulae in Cramér and Leadbetter’s theorem. This needed some care: to evaluate \(\mathbb{V}[ \mathcal{N}_{J,\theta_{0}}] \), we had to perform numerical integration on an expression involving derivatives of the correlation function G, itself a parameter-dependent integral which cannot be reduced to simpler functions of the parameter. This kind of numerical evaluation is delicate to perform precisely, especially when the integrand oscillates, as is the case here (the result can then depend heavily on the sampling strategy), and when multiple operations must be performed on the outputs of these integrals: the calculation of first and second derivatives of the numerically evaluated G, and the multiple divisions, might propagate the errors quite erratically.
In order to keep the numerical errors from masking the “exact” effect of thickening the spectrum, we forced the software to optimize its calculation strategy (adaptive Monte Carlo integration), detecting oscillations in the integrand and adapting the sampling requirements, and we extended evaluation time beyond the usual limits (by dropping the built-in restrictions on recursion depth). Once the difference between successive evaluations had stabilized, this yielded the variance curve displayed on Fig. 4.
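To convey the flavor of the oscillatory integrals involved, here is a minimal Python sketch (not the Mathematica code we actually used; the annulus bounds \(R_{\mathrm{inf}} = 10.0\), \(R_{\mathrm{sup}} = 11.9\) are illustrative choices of ours) evaluating the covariance G of an invariant GRF whose spectrum is uniform, with respect to planar area, on an annulus:

```python
# Sketch (not the paper's code): for an invariant GRF whose power spectrum is
# uniform (w.r.t. planar area) on the annulus R_inf <= k <= R_sup, the
# covariance is the oscillatory Bessel integral
#   G(tau) = ( \int_{R_inf}^{R_sup} J0(k*tau) k dk ) / ( \int_{R_inf}^{R_sup} k dk ).
# scipy's adaptive quadrature plays the role of Mathematica's NIntegrate here.
from scipy.integrate import quad
from scipy.special import j0

def covariance(tau, r_inf=10.0, r_sup=11.9):
    """Normalized covariance of an annulus-spectrum invariant GRF."""
    num, _ = quad(lambda k: j0(k * tau) * k, r_inf, r_sup, limit=200)
    den = 0.5 * (r_sup**2 - r_inf**2)
    return num / den

print(covariance(0.0))   # 1.0 up to quadrature error, by normalization
print(covariance(1.0))   # G oscillates and decays for larger tau
```

Evaluating \(\mathbb{V}[\mathcal{N}_{J,\theta_{0}}]\) then requires first and second derivatives of such integrals, followed by a further integration, which is where the numerical difficulties described above accumulate.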
Note that the drawn variances correspond to fields with very slightly different spacings \(\varLambda_{\mathrm{sq}}\); however, it is easy to check numerically that for every spectrum considered here, the variance of a monochromatic field with wavelength \(\varLambda_{\mathrm{sq}}\) is lower than the variance drawn on Fig. 4.
Numerical evaluations also show that at a fixed spectral width, using only a few circles to build the field (i.e. introducing several characteristic wavelengths in the map) leads to a higher variance than simulating a uniform spectral distribution. To see this, we first evaluated \(\mathbb{V}[ \mathcal{N}_{J,\theta_{0}}] \) for an invariant GRF whose spectrum gathered three circles of radii \(R_{\mathrm{inf}}\), \(R_{\mathrm{sup}}\) and \(R_{\mathrm{mean}} = 10.95\) in the fixed arbitrary unit, then spanned the interval between \(R_{\mathrm{inf}}\) and \(R_{\mathrm{sup}}\) with more and more circles, using spectra with \(2N+1\) circles of radii \(R_{\mathrm{mean}} + 0.95\frac{i}{N}\), \(-N \leq i \leq N\). We observed \(\mathbb{V}[ \mathcal{N}_{J,\theta_{0}}] \) to decrease with N in that case, and to approach a limit value. From Riemann’s definition of the integral, we see that this limit corresponds to a spectrum uniformly distributed in the annulus delimited by \(R_{\mathrm{inf}}\) and \(R_{\mathrm{sup}}\). To keep the evaluation time reasonable (it is roughly quadratic in N), we kept the value \(N = 18\) for the evaluations whose results are shown on Fig. 3; these are close to the observed limit values. We should also add here that we observed higher values for variance when using smooth spectra with several dominant wavelengths.
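The Riemann-sum argument can be illustrated with a short computation (a sketch under our own conventions: we give the \(2N+1\) circles equal weights, so that the limit is the radius-uniform average over the annulus, and we take \(R_{\mathrm{inf}} = 10.0\), \(R_{\mathrm{sup}} = 11.9\)):

```python
# Sketch (our reconstruction of the discretization step): a spectrum made of
# 2N+1 equally weighted circles of radii R_mean + 0.95*i/N, -N <= i <= N,
# has covariance equal to a Riemann sum of the radius-uniform annulus
# covariance, so the two converge to one another as N grows.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

R_MEAN = 10.95

def covariance_discrete(tau, n):
    radii = R_MEAN + 0.95 * np.arange(-n, n + 1) / n
    return float(np.mean(j0(radii * tau)))

def covariance_annulus(tau, r_inf=10.0, r_sup=11.9):
    num, _ = quad(lambda k: j0(k * tau), r_inf, r_sup)
    return num / (r_sup - r_inf)

# The discretization error shrinks as N grows from 3 to 18.
errs = [abs(covariance_discrete(0.7, n) - covariance_annulus(0.7)) for n in (3, 18)]
print(errs)
```

At \(N = 18\) the discrete covariance is already very close to the annulus-uniform one, consistent with the limiting behavior described above.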
This is another argument for monochromaticity yielding minimum variance. Since the space of possible spectra with a fixed support is infinitedimensional, our numerical experiments cannot explore it all. But we feel justified in stating the following numerical results on quasiperiodicity in orientation maps sampled from invariant Gaussian fields.
Result 3

(i)
For uniform spectra, variance increases with the width of the supporting annulus.

(ii)
For a given spectral width, dominance of a single wavelength seems to yield minimum variance. Introducing more than one critical wavelength in the spectrum systematically increases nonuniformity in the typical size of hypercolumns.
Result 3 indicates that sharp dominance of a single wavelength is the best way to obtain minimum variance. What is more, the horizontal slope at zero in Fig. 3 means that fields which are close to monochromatic have much the same quasiperiodicity properties as monochromatic invariant fields. This is quite welcome in view of Schnabel’s results: of course we cannot expect actual monochromaticity in real OPMs, but clear dominance of a wavelength is much more reasonable biologically. A more theoretical benefit is the flexibility of invariant GRFs for modeling: a model-adapted precise formula for the power spectrum may be inserted without damage to the global, robust resemblance between the predicted OPMs and real maps [5].
These observations reinforce the hypothesis that our three informational principles (randomness structure, invariance, spectral thinness) are sufficient to reproduce quantitative observable features of real maps, though as we saw, using an invariant GRF with the most realistic spectrum does not necessarily yield a more realistic result than using a monochromatic GRF, and leads to incompatibilities with the observed mature maps. This form of universality is certainly welcome: individual maps in different animals, from different species (with different developmental scenarios) necessarily have different spectra, but general organizing principles can be enough to explain even quantitative observed properties.
Discussion
In this short note we recalled that simple hypotheses on randomness, invariance, and spectral width of model orientation maps reproduce important geometrical features of real maps. It should not be forgotten that we worked in a simplified mathematical framework which reproduces only some aspects of the common design and whose dissimilarity with real maps can be established rigorously, as we recalled. Still, we feel two new points deserve special attention. First, we showed that in the simplified setting of Gaussian Random Fields, the best mathematical quantity for explaining the local quasiperiod is the quadratic mean wavenumber rather than the mean wavenumber, and we pointed out that a pinwheel density of π, when scaled with respect to this intrinsic column spacing, is a signature of Euclidean symmetry and not of Euclidean symmetry plus spectral thinness. Second, we established (through numerical analysis of an exact formula) that the variability of local quasiperiods is minimized when the standard deviation of the spectral wavelength tends to zero.
Our analysis shows that at least in the setting of Gaussian fields, realistically large spectra are compatible with a low variance; we suggest that a low variance for column spacing might be observed in real data, and perhaps also a low variance for the number of pinwheels in an area \(\varLambda_{\mathrm{exp}}^{2}\). Spectral thinness is usually attributed to biological hardware in the cortex (like pre-sight propagation wavelengths in the retina or thalamus [43, 44]); this turns out to be compatible with some form of optimality in information processing.
It would also be very interesting to compare the variance of column spacings in real maps (in units of the spacing evaluated by averaging local periods) with the smallest possible value for GRFs, observed in this paper for monochromatic fields (see Fig. 4); if variance in real maps turned out to be lower than in monochromatic Gaussian fields, it would mean that cortical circuitry refinement, featuring long-range interactions, brings mature maps closer to a geometrical homogeneity of hypercolumns. This would also throw some light on the fact that as development proceeds and the probability distribution of the field turns away from that of a GRF, driven by activity-dependent shaping, the column spacing obtained by averaging local periods seems to come closer to the wavelength associated to the mean or peak wavenumber (see [9, supplementary material, p. 5]) than it is in GRFs. It is then remarkable that development should maintain the value of π for pinwheel density when scaled with respect to the current value of column spacing, keeping Result 2 valid over time (of course the density seems to move if one does not change the definition of column spacing over time, but the best-suited quantity for measuring column spacing seems to change). Perhaps this also has a benefit for areas receiving inputs from V1, keeping their tuning with the pinwheel subsystem (which seems to have an independent interest for information processing; see [45]) stable.
Notes
 1.
The measure on \(\mathbb {S}^{1}\) used in this formula has total mass one.
References
 1.
Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol. 1962;160(1):106–54.
 2.
Hubel DH, Wiesel TN. Receptive fields of single neurones in the cat’s striate cortex. J Physiol. 1959;148(3):574–91.
 3.
Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. J Physiol. 1968;195(1):215–43.
 4.
Van Hooser SD, Heimel JA, Nelson SB. Functional cell classes and functional architecture in the early visual system of a highly visual rodent. Prog Brain Res. 2005;149:127–45.
 5.
Kaschube M. Neural maps versus salt-and-pepper organization in visual cortex. Curr Opin Neurobiol. 2014;24:95–102.
 6.
Bonhoeffer T, Grinvald A. The layout of iso-orientation domains in area 18 of cat visual cortex: optical imaging reveals a pinwheel-like organization. J Neurosci. 1993;13:4157–80.
 7.
Bonhoeffer T, Grinvald A. Iso-orientation domains in cat visual cortex are arranged in pinwheel-like patterns. Nature. 1991;353(6343):429–31.
 8.
Ohki K, Chung S, Kara P, Hübener M, Bonhoeffer T, Reid RC. Highly ordered arrangement of single neurons in orientation pinwheels. Nature. 2006;442(7105):925–8.
 9.
Kaschube M, Schnabel M, Löwel S, Coppola DM, White LE, Wolf F. Universality in the evolution of orientation columns in the visual cortex. Science. 2010;330(6007):1113–6.
 10.
Bosking WH, Zhang Y, Schofield B, Fitzpatrick D. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. J Neurosci. 1997;17(6):2112–27.
 11.
Swindale NV. A model for the formation of orientation columns. Proc R Soc Lond B, Biol Sci. 1982;215(1199):211–30.
 12.
Miller KD. π = visual cortex. Science. 2010;330:1059–60.
 13.
Yu H, Farley BJ, Jin DZ, Sur M. The coordinated mapping of visual space and response features in visual cortex. Neuron. 2005;47(2):267–80.
 14.
Reichl L, Heide D, Löwel S, Crowley JC, Kaschube M, Wolf F. Coordinated optimization of visual cortical maps (I) symmetry-based analysis. PLoS Comput Biol. 2012;8(11):e1002466.
 15.
Petitot J. Neurogéométrie de la vision: modèles mathématiques et physiques des architectures fonctionnelles. Paris: Éditions École Polytechnique; 2008.
 16.
Barbieri D, Citti G, Sanguinetti G, Sarti A. An uncertainty principle underlying the functional architecture of V1. J Physiol (Paris). 2012;106(5):183–93.
 17.
Swindale NV. The development of topography in the visual cortex: a review of models. Netw Comput Neural Syst. 1996;7(2):161–247.
 18.
Chalupa LM, Werner JS, editors. The visual neurosciences. Vol. 1. Cambridge: MIT Press; 2004.
 19.
Nauhaus I, Nielsen KJ. Building maps from maps in primary visual cortex. Curr Opin Neurobiol. 2014;24:1–6.
 20.
Swindale N, Shoham D, Grinvald A, Bonhoeffer T, Hübener M. Visual cortex maps are optimized for uniform coverage. Nat Neurosci. 2000;3:822–6.
 21.
Reichl L, Heide D, Lowel S, Crowley JC, Kaschube M, et al.. Coordinated optimization of visual cortical maps (II) numerical studies. PLoS Comput Biol. 2012;8(11):e1002756.
 22.
Nauhaus I, Nielsen K, Disney A, Callaway E. Orthogonal micro-organization of orientation and spatial frequency in primate primary visual cortex. Nat Neurosci. 2012;15:1683–90.
 23.
Wolf F, Geisel T. Spontaneous pinwheel annihilation during visual development. Nature. 1998;395(6697):73–8.
 24.
Kaschube M, Wolf F, Geisel T, Löwel S. Genetic influence on quantitative features of neocortical architecture. J Neurosci. 2002;22(16):7206–17.
 25.
Wolf F, Geisel T. Universality in visual cortical pattern formation. J Physiol (Paris). 2003;97(2):253–64.
 26.
Nauhaus I, Busse L, Carandini M, Ringach D. Stimulus contrast modulates functional connectivity in visual cortex. Nat Neurosci. 2008;12(1):70–6.
 27.
Adler RJ, Taylor JE. Random fields and geometry. Berlin: Springer; 2009.
 28.
Abrahamsen P. A review of Gaussian random fields and correlation functions. 2nd ed. Oslo (Norway): Norsk Regnesentral; 1997 Apr. Report No.: 917. 64 p.
 29.
Azaïs JM, Wschebor M. Level sets and extrema of random processes and fields. New York: Wiley; 2009.
 30.
Schnabel M. A symmetry of the visual world in the architecture of the visual cortex [PhD thesis]. [Goettingen (Germany)]: University of Goettingen; 2008.
 31.
Wolf F. Symmetry, multistability, and longrange interactions in brain development. Phys Rev Lett. 2005;95:208701.
 32.
Yaglom AM. Second-order homogeneous random fields. In: Proceedings of the fourth Berkeley symposium on mathematical statistics and probability. Vol. 2, Contributions to probability theory. Berkeley: University of California Press; 1961.
 33.
Niebur E, Wörgötter F. Design principles of columnar organization in visual cortex. Neural Comput. 1994;6(4):602–14.
 34.
Koulakov A, Chklovskii D. Orientation preference patterns in mammalian visual cortex: a wire length minimization approach. Neuron. 2001;29(2):519–27.
 35.
Paik SB, Ringach D. Retinal origin of orientation maps in visual cortex. Nat Neurosci. 2011;14:919–25.
 36.
Berry MV, Dennis MR. Phase singularities in isotropic random waves. Proc R Soc Lond A, Math Phys Sci. 2000;456(2001):2059.
 37.
Azaïs JM, León JR, Wschebor M. Rice formulae and Gaussian waves. Bernoulli. 2011;17(1):170–93.
 38.
Bohr HA. Almost periodic functions. New York: Chelsea; 1947.
 39.
Kaschube M, Schnabel M, Wolf F. Self-organization and the selection of pinwheel density in visual cortical development. New J Phys. 2008;10(1):015009.
 40.
Zhang K, Sejnowski TJ. Neuronal tuning: to sharpen or broaden? Neural Comput. 1999;11(1):75–84.
 41.
Harris CM, Wolpert DM. Signaldependent noise determines motor planning. Nature. 1998;394(6695):780–4.
 42.
Cramér H, Leadbetter MR. Stationary and related stochastic processes. Sample function properties and their applications. New York: Wiley; 1967. Reprint: Dover books, 2004.
 43.
Ernst UA, Pawelzik KR, Sahar-Pikielny C, Tsodyks MV. Intracortical origin of visual maps. Nat Neurosci. 2001;4(4):431–6.
 44.
Maffei L, Galli-Resta L. Correlation in the discharges of neighboring rat retinal ganglion cells during prenatal life. Proc Natl Acad Sci USA. 1990;87(7):2861–4.
 45.
Dragoi V, Rivadulla C, Sur M. Foci of orientation plasticity in visual cortex. Nature. 2001;411(6833):80–6.
Acknowledgements
My doctoral advisor Daniel Bennequin supplied many suggestions, constant support, and very welcome help in writing this note: I owe him many warm thanks. It is a pleasure to thank Fred Wolf for useful discussion; I am also grateful to Christine Cachot for her help in setting up numerical experiments. The contents of this paper benefited greatly from the remarks of two anonymous referees: let me also thank them here.
Author information
Additional information
Competing Interests
The author declares that he has no competing interests.
Appendix
A.1 Sampling from Monochromatic Invariant Random Fields
In the main text, we defined monochromatic invariant Gaussian random fields through their correlation functions, and we studied the difference between monochromatic invariant fields and general invariant Gaussian fields. On Fig. 2 we displayed an OPM sampled from a monochromatic invariant Gaussian field, but we did not say how the drawn object was built from its correlation function. We provide some details in this subsection.
Recall that the covariance function of a monochromatic invariant Gaussian random field with correlation wavelength Λ is provided by the inverse Fourier transform of the Dirac distribution on a circle, that is,
\[ \varGamma(x) = \int_{\mathbb{S}^{1}} e^{i R \langle u, x \rangle} \, du = J_{0} ( R \Vert x \Vert ), \]
with \(R = \frac{2\pi}{\varLambda}\). Now, Γ satisfies the Helmholtz equation \(\Delta\varGamma = -R^{2} \varGamma\); from this we can easily deduce that
is identically zero. This means that any (strictly speaking, almost any) orientation map drawn from z itself satisfies the Helmholtz equation; thus OPMs drawn from z are superpositions of plane waves with wavenumber R and various propagation directions.
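As a quick numerical sanity check (ours, not part of the paper's computations; recall that with a unit-mass measure on the circle, the covariance is \(\varGamma(x) = J_{0}(R\Vert x\Vert)\)), one can verify the Helmholtz property of Γ with a five-point finite-difference Laplacian:

```python
# Sanity check: Gamma(x) = J0(R*|x|) satisfies the Helmholtz equation
#   Delta Gamma = -R^2 Gamma.
# We compare a five-point finite-difference Laplacian with -R^2 * Gamma
# at a generic point away from the origin.
import numpy as np
from scipy.special import j0

R = 2 * np.pi          # wavenumber, i.e. Lambda = 1
h = 1e-3               # finite-difference step

def gamma(x, y):
    return j0(R * np.hypot(x, y))

x0, y0 = 0.31, 0.17    # arbitrary test point
lap = (gamma(x0 + h, y0) + gamma(x0 - h, y0)
       + gamma(x0, y0 + h) + gamma(x0, y0 - h)
       - 4 * gamma(x0, y0)) / h**2
print(lap, -R**2 * gamma(x0, y0))   # the two numbers agree closely
```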
Thus, we know that there is a random Gaussian measure \(d\mathbb{Z}\) on the circle which allows for describing z as a stochastic integral:
Now, from the Gaussian nature of z and the Euclidean invariance condition, we have a simple way to describe ℤ, which we used for actual computations: if \((\zeta_{k})_{k \in \mathbb {N}^{\star}}\) is a sequence of independent standard Gaussian complex random variables, and if \(u_{1}, \ldots, u_{n}\) are the complex numbers coding for the vertices of a regular n-gon inscribed in the unit circle, then
is a Gaussian random field. As n grows to infinity, we get random fields which are closer and closer to a monochromatic invariant Gaussian random field, and our field z is but the limiting field.
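For the record, here is a minimal Python sketch of this construction (not the code used to produce Fig. 2; all names and parameter values are our own illustrative choices):

```python
# Sketch of the sampling scheme described above: an approximation
#   z_n(x) = (1/sqrt(n)) * sum_k zeta_k * exp(i R <u_k, x>)
# of a monochromatic invariant Gaussian field, where the u_k are the vertices
# of a regular n-gon on the unit circle and the zeta_k are independent
# standard complex Gaussians.  The orientation at x is arg(z_n(x)) / 2.
import numpy as np

def sample_opm(n=64, size=128, wavelength=1.0, extent=8.0, seed=0):
    rng = np.random.default_rng(seed)
    r = 2 * np.pi / wavelength
    angles = 2 * np.pi * np.arange(n) / n            # n-gon vertex directions
    u = np.stack([np.cos(angles), np.sin(angles)])   # shape (2, n)
    zeta = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    xs = np.linspace(0.0, extent, size)
    xx, yy = np.meshgrid(xs, xs)
    phase = r * (xx[..., None] * u[0] + yy[..., None] * u[1])  # (size, size, n)
    z = (zeta * np.exp(1j * phase)).sum(axis=-1) / np.sqrt(n)
    return np.angle(z) / 2 % np.pi   # orientation map, values in [0, pi)

opm = sample_opm()
print(opm.shape)   # a size-by-size orientation preference map
```

Increasing n brings the sampled field closer to the limiting monochromatic field, as explained above.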
A.2 Kac–Rice Formula
We derived Result 1 from the classical Kac–Rice formula, and the theorem from which we obtained Result 2 can be obtained from a suitable generalization to plane random fields (see [29, 37] and [25, 27, 36] in the main text). Here we give the precise theorem we used in the derivation of Result 1. This formula was obtained as early as 1944, though the road to a complete proof later proved sinuous; the initial motivation on Rice’s side was the study of noise in communication channels, which can be thought of as random functions of time. For modeling noise it is then reasonable to introduce Gaussian random fields defined on the real line, and if the properties of the communication channel do not change over time, to assume further that they are stationary. Rice discovered that there is a very simple formula for the mean number of times this kind of field crosses a given “noise level”; this is the
Classical Kac–Rice formula
Consider a stationary Gaussian Random Field Φ defined on the real line, with smooth trajectories; choose a real number u, and consider an interval I of length ℓ on the real line. Write \(\mathcal{N}_{u, I}\) for the random variable recording the number of points x on I where \(\varPhi (x) = u\); then
\[ \mathbb{E} [ \mathcal{N}_{u, I} ] = \frac{\ell}{\pi} \sqrt{\frac{\lambda}{\lambda_{0}}} \exp \biggl( - \frac{u^{2}}{2 \lambda_{0}} \biggr), \]
where \(\lambda= \mathbb{E} [\varPhi'(0)^{2} ]\) is the second spectral moment of the field and \(\lambda_{0} = \mathbb{E} [\varPhi(0)^{2} ]\) is its variance.
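The formula can be illustrated by a Monte Carlo experiment (ours; the trigonometric spectrum and all parameters are arbitrary choices), taking for Φ a finite random trigonometric sum normalized to unit variance, so that the predicted mean number of u-crossings on an interval of length ℓ is \((\ell/\pi)\sqrt{\lambda}\, e^{-u^{2}/2}\):

```python
# Monte Carlo check of the classical Kac-Rice formula for a stationary
# Gaussian process with unit variance:
#   E[N_{u,I}] = (l/pi) * sqrt(lambda) * exp(-u^2/2),  lambda = E[Phi'(0)^2].
# Phi is a random trigonometric sum; crossings are counted on a fine grid.
import numpy as np

rng = np.random.default_rng(1)
omegas = np.array([1.0, 2.0, 3.0])   # spectral lines, equal weights
lam = np.mean(omegas**2)             # second spectral moment
u, ell, trials = 0.5, 20.0, 400
x = np.linspace(0.0, ell, 4001)

counts = []
for _ in range(trials):
    a = rng.standard_normal(3) / np.sqrt(3)   # unit total variance
    b = rng.standard_normal(3) / np.sqrt(3)
    phi = (a * np.cos(np.outer(x, omegas)) + b * np.sin(np.outer(x, omegas))).sum(axis=1)
    counts.append(np.sum(np.diff(np.sign(phi - u)) != 0))

empirical = np.mean(counts)
predicted = (ell / np.pi) * np.sqrt(lam) * np.exp(-u**2 / 2)
print(empirical, predicted)   # the two agree within sampling error
```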
For the proof of this old formula, as well as a presentation of all the features of GRFs underlying our main text, see [27, 42]. For the proof of the theorem we used for Result 2, which is much more recent, we refer to [25] and [37].
Just after Result 1, we mentioned that the comparison between \(\varLambda _{\mathrm{mean}}\) and \(\varLambda_{\mathrm{sq}}\) is an immediate consequence of Jensen’s inequality, so let us give its statement here. We start with a continuous probability distribution ℙ on the real line. Now whenever φ is a convex, real-valued function on ℝ, Jensen’s inequality is the fact that, for each measurable function f,
\[ \varphi \biggl( \int_{\mathbb{R}} f \, d\mathbb{P} \biggr) \leq \int_{\mathbb{R}} (\varphi \circ f) \, d\mathbb{P} \]
(compare the triangle inequality). In the main text, we used it when ℙ is the power spectrum distribution of a random field, f is the identity function of ℝ, and φ is \(x \mapsto x^{2}\).
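For instance (with a wavenumber spectrum of our own choosing), Jensen's inequality with \(\varphi(x) = x^{2}\) gives \(\mathbb{E}[k]^{2} \leq \mathbb{E}[k^{2}]\), so the spacing built from the quadratic mean wavenumber never exceeds the one built from the mean wavenumber:

```python
# Numerical illustration of the Jensen comparison: for any wavenumber
# distribution, E[k]^2 <= E[k^2], hence
#   Lambda_sq = 2*pi/sqrt(E[k^2]) <= Lambda_mean = 2*pi/E[k],
# with equality exactly in the monochromatic (single-wavenumber) case.
import numpy as np

k = np.array([10.0, 10.95, 11.9])     # three spectral circles, equal weights
lambda_mean = 2 * np.pi / np.mean(k)
lambda_sq = 2 * np.pi / np.sqrt(np.mean(k**2))
print(lambda_sq, lambda_mean)          # lambda_sq is strictly smaller here

k_mono = np.array([10.95])             # monochromatic spectrum: equality
print(2 * np.pi / np.sqrt(np.mean(k_mono**2)), 2 * np.pi / np.mean(k_mono))
```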
Rights and permissions
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
About this article
Keywords
 Visual cortex
 Orientation hypercolumns
 Column spacing
 Pinwheel density
 Gaussian random fields
 Kac–Rice formula