 Research
 Open Access
Drift–diffusion models for multiple-alternative forced-choice decision making
The Journal of Mathematical Neuroscience volume 9, Article number: 5 (2019)
Abstract
The canonical computational model for the cognitive process underlying two-alternative forced-choice decision making is the so-called drift–diffusion model (DDM). In this model, a decision variable keeps track of the integrated difference in sensory evidence for two competing alternatives. Here I extend the notion of a drift–diffusion process to multiple alternatives. The competition between n alternatives takes place in a linear subspace of \(n-1\) dimensions; that is, there are \(n-1\) decision variables, which are coupled through correlated noise sources. I derive the multiple-alternative DDM starting from a system of coupled, linear firing rate equations. I also show that a Bayesian sequential probability ratio test for multiple alternatives is, in fact, equivalent to these same linear DDMs, but with time-varying thresholds. If the original neuronal system is nonlinear, one can once again derive a model describing a lower-dimensional diffusion process. The dynamics of the nonlinear DDM can be recast as the motion of a particle on a potential, the general form of which is given analytically for an arbitrary number of alternatives.
Introduction
Perceptual decision-making tasks require a subject to make a categorical decision based on noisy or ambiguous sensory evidence. A computationally advantageous strategy in doing so is to integrate the sensory evidence in time, thereby improving the signal-to-noise ratio. Indeed, when faced with two possible alternatives, accumulating the difference in evidence for the two alternatives until a fixed threshold is reached is an optimal strategy, in that it minimizes the mean reaction time for a desired level of performance. This is the computation carried out by the sequential probability ratio test devised by Wald [1], and its continuous-time variant, the drift–diffusion model (DDM) [2]. It would be hard to overstate the success of these models in fitting psychophysical data from both animals and human subjects in a wide array of tasks, e.g. [2,3,4,5], suggesting that brain circuits can implement a computation analogous to the DDM.
At the same time, neuroscientists have characterized neuronal activity in cortical areas of the monkey which appears to reflect an integration process during DM tasks [6], although see [7]. The relevant computational building blocks, as revealed by decades of in-vivo electrophysiology, seem to be neurons whose activity selectively increases with increasing likelihood of a given upcoming choice. Attractor network models, built on this principle of competing, selective neuronal populations, generate realistic performance and reaction times; they also provide a neuronal description which captures some salient qualitative features of the in-vivo data [8, 9].
While focus in the neuroscience community has been almost exclusively on two-alternative DM (although see [10]), from a computational perspective there does not seem to be any qualitative difference between two or more alternatives. In fact, in a model, increasing the number of alternatives is as trivial as adding another neuronal population to the competition. On the other hand, how to add an alternative to the DDM framework does not seem, on the face of things, obvious. Several groups have sought to link the DDM and attractor networks for two-alternative DM. When the attractor network is assumed linear, one can easily derive an equation for a decision variable, representing the difference in the activities of the two competing populations, which precisely obeys a DDM [11]. For three-alternative DM previous work has shown that a 2D diffusion process can be defined by taking appropriate linear combinations of the three input streams [12]. The general n-alternative case for leaky accumulators has also been treated [13]. In the first section of the paper I will summarize and build upon this previous work to illustrate how one can obtain an equivalent DDM, starting from a set of linear firing rate equations which compete through global inhibitory feedback. The relevant decision variables are combinations of the activity of the neuronal populations, which represent distinct modes of competition. Specifically, I will propose a set of “competition” basis functions which allow for a simple, systematic derivation of the DDMs for any n. I will also show how a Bayesian implementation of the multiple sequential probability ratio test (MSPRT) [14,15,16] is equivalent in the continuum limit to these same DDMs, but with a moving threshold.
Of course, linear models do not accurately describe the neuronal data from experiments on DM. However, previous work has shown that attractor network models for two-alternative DM operate in the vicinity of a pitchfork bifurcation, which is what underlies the winner-take-all competition leading to the decision dynamics [17]. In this regime the neuronal dynamics is well described by a stochastic normal-form equation which right at the bifurcation is precisely equivalent to the DDM with an additional cubic nonlinearity. This nonlinear DDM fits behavioral data extremely well, including both correct and error reaction times. In the second part of the paper I will show how such normal-form equations can be derived for an arbitrary number of neuronal populations. These equations can be thought of as nonlinear DDMs and, in fact, are identical to the linear DDMs with the addition of quadratic nonlinearities (for \(n>2\)). Amazingly, the dynamics of such a nonlinear DDM can be recast as the diffusion of a particle on a potential, which is obtained analytically, for arbitrary n.
Results
The canonical drift–diffusion model (DDM) can be written \(\tau \dot{X} = \mu + \xi (t)\) (Eq. (1)),
where X is the decision variable, and μ is the drift or the degree of evidence in favor of one choice over the other: we can associate choice 1 with positive values of X and choice 2 with negative values. The Gaussian process \(\xi (t)\) represents noise and/or uncertainty in the integration process, with \(\langle \xi (t)\rangle = 0\) and \(\langle \xi (t)\xi (t^{\prime })\rangle =\sigma ^{2}\delta (t-t^{\prime })\). I have also explicitly included a characteristic time scale τ, which will appear naturally if one derives Eq. (1) from a neuronal model. The decision variable evolves until reaching one of two boundaries ±θ at which point a decision for the corresponding choice has been made.
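As a concrete illustration, Eq. (1) can be integrated numerically with an Euler–Maruyama scheme. This is a minimal sketch; all parameter values are illustrative assumptions rather than fits from the paper.

```python
import numpy as np

def simulate_ddm(mu, sigma, theta, tau=0.02, dt=1e-4, t_max=5.0, rng=None):
    """Euler-Maruyama integration of tau*dX = mu*dt + sigma*dW.
    Returns (choice, reaction time): choice 1 when X hits +theta,
    choice 2 when X hits -theta, 0 if no bound is reached before t_max."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while t < t_max:
        x += (mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()) / tau
        t += dt
        if x >= theta:
            return 1, t
        if x <= -theta:
            return 2, t
    return 0, t

choice, rt = simulate_ddm(mu=0.5, sigma=0.3, theta=1.0,
                          rng=np.random.default_rng(0))
```

Repeating this over many trials yields the accuracy and reaction-time distributions discussed throughout the paper.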
It is clear that a single variable can easily be used to keep track of two competing processes by virtue of its having two possible signs. But what if there are three or more alternatives? In this case it is less clear. In fact, if we consider a drift–diffusion process such as the one in Eq. (1) as an approximation to an actual integration process carried out by neuronal populations, then there is a systematic approach to deriving the corresponding DDM. The value of such an approach is that one can directly tie the DDM to the neuronal dynamics, thereby linking behavior to neuronal activity.
I will first consider the derivation of a DDM starting from a system of linear firing rate equations. This analysis is similar to that found in Sect. 4.4 of [13], although the starting model is different. In this case the derivation involves a rotation of the system so as to decouple the linear subspace for the competition between populations from the subspace which describes noncompetitive dynamical modes. This rotation is equivalent to expressing the firing rates in terms of a set of orthogonal basis functions: one set for the competition, and another for the noncompetitive modes. I subsequently consider a system of nonlinear firing rate equations. In this case one can once again derive a reduced set of equations to describe the decision-making dynamics. The reduced models have precisely the form of the corresponding DDM for a linear system, but now with additional nonlinear terms. These terms reflect the winner-take-all dynamics which emerge in nonlinear systems with multistability. Not only do the equations have a simple, closed-form solution for any number of alternatives, but they can be succinctly expressed in terms of a multivariate potential.
Derivation of a DDM for two-alternative DM
The DDM can be derived from a set of linear equations which model the competition between two populations of neurons, the activity of each of which encodes the accumulated evidence for the corresponding choice. The equations are
where \(r_{I}\) represents the activity of a population of inhibitory neurons. The parameter s represents the strength of excitatory self-coupling and c is the strength of the global inhibition. The characteristic time constants of excitation and inhibition are τ and \(\tau _{I}\) respectively. A choice is made for 1 (2) whenever \(r_{1} = r_{th}\) (\(r_{2} = r_{th}\)).
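The displayed system (Eq. (2)) does not render here. A plausible explicit form, chosen so that projecting onto the competition mode reproduces the drift \((I_{1}-I_{2})/2\) derived below, is sketched here; the unit excitatory-to-inhibitory coupling is an assumption.

```latex
% Sketch of the two-population rate equations (Eq. (2)); the unit
% excitatory-to-inhibitory coupling is an assumption.
\tau \dot{r}_{1} = -r_{1} + s\,r_{1} - c\,r_{I} + I_{1} + \xi_{1}(t),
\tau \dot{r}_{2} = -r_{2} + s\,r_{2} - c\,r_{I} + I_{2} + \xi_{2}(t),
\tau_{I}\,\dot{r}_{I} = -r_{I} + r_{1} + r_{2}.
```

With this form, projecting onto the competition mode gives \(\tau \dot{X} = (s-1)X + (I_{1}-I_{2})/2 + (\xi_{1}(t)-\xi_{2}(t))/2\), in which the inhibition cancels exactly.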
It is easier to work with the equations if they are written in matrix form, which is
where
In order to derive the DDM, I express the firing rates in terms of three orthogonal basis functions: one which represents a competition between the populations \(\mathbf{e}_{1}\), a second for common changes in population rates \(\mathbf{e}_{c}\) and a third which captures changes in the activity of the inhibitory cells \(\mathbf{e}_{I}\). Specifically I write
where \(\mathbf{e}_{1} = (1,-1,0)\), \(\mathbf{e}_{C} = (1,1,0)\) and \(\mathbf{e}_{I} = (0,0,1)\). The decision variable will be X, while \(\mathrm{m}_{C}\) and \(\mathrm{m}_{I}\) stand for the common mode and inhibitory mode respectively.
The dynamics of each of these modes can be isolated in turn by projecting Eq. (2) onto the appropriate eigenvector. For example, the dynamics for the decision variable X are found by projecting onto \(\mathbf{e}_{1}\), namely
and similarly the dynamics for \(\mathrm{m}_{C}\) and \(\mathrm{m}_{I}\) are found by projecting onto \(\mathbf{e}_{C}\) and \(\mathbf{e}_{I}\) respectively. Doing so results in the set of equations
If the self-coupling sits at the critical value \(s = 1\), then these equations simplify to
from which it is clear that the equation for X describes a drift–diffusion process. It is formally identical to Eq. (1) with \(\mu = \frac{I_{1}-I_{2}}{2}\) and \(\xi (t) = \frac{\xi _{1}(t)-\xi _{2}(t)}{2}\). Importantly, X is uncoupled from the common and inhibitory modes, which themselves form a coupled subsystem. For \(s\ne 1\) the decision variable still decouples from the other two equations, but the process now has a leak term for \(s<1\) (or a ballistic term for \(s>1\)) [18]. It has therefore been argued that obtaining a DDM from linear neuronal models requires fine tuning, a drawback which can be avoided in nonlinear models in which the linear dynamics is approximated via multistability; see e.g. [19, 20]. If one ignores the noise terms, the steady state of this linear system is \((X,\mathrm{m}_{C},\mathrm{m}_{I}) = (X_{0},\mathrm{M}_{C},\mathrm{M}_{I})\), where
One can study the stability of this solution by considering a perturbation of the form \((X,\mathrm{m}_{C},\mathrm{m}_{I}) = (X_{0}, \mathrm{M}_{c},\mathrm{M}_{I})+(\delta X,\delta \mathrm{m}_{C},\delta \mathrm{m}_{I})e^{\lambda t}\). Plugging this into Eq. (7) one finds that there is a zero eigenvalue associated with the decision variable, i.e. \(\lambda _{1} = 0\), whereas the eigenvalues corresponding to the subsystem comprising the common and inhibitory modes are given by
which always have a negative real part. Therefore, as long as \(\tau _{I}\) is not too large, perturbations in the common mode or in the inhibition will quickly decay away. This allows one to ignore their dynamics and assume they take on their steadystate values. Finally, the bounds for the decision variable are found by noting that \(r_{1} = X+ \mathrm{M}_{C}\) and \(r_{2} = X+\mathrm{M}_{C}\). Therefore, given that the neuronal threshold for a decision is defined as \(r_{th} \), we find that \(\theta = \pm (r_{th}\mathrm{M}_{C})\).
Derivation of a DDM for three-alternative DM
I will go over the derivation of a drift–diffusion process for three-choice DM in some detail for clarity, although conceptually it is very similar to the two-choice case. The derivation can then be trivially extended to n-alternative DM for any n.
The linear rate equations are once again given by Eq. (3), with
One once again writes the firing rates in terms of orthogonal basis functions, of which there must now be four. The common and inhibitory modes are the same as before, whereas now there will be two distinct modes to describe the competition between the three populations, in contrast to just a single decision variable. Any orthogonal basis in the 2D space for competition is equally valid. However, in order to make the choice systematic, I assume that the first vector is just the one from the two-alternative case, namely \((1,-1,0,0)\), from which it follows that the second must be (up to an amplitude) \((1,1,-2,0)\). Then for n alternatives I will always take the first \(n-2\) basis vectors to be those from the \(n-1\) case. The last eigenvector must be orthogonal to these. Specifically, for \(n = 3\), \(\mathbf{r} = \mathbf{e}_{1}X_{1}(t)+\mathbf{e}_{2}X_{2}(t)+\mathbf{e}_{C}\mathrm{m}_{C}(t)+\mathbf{e}_{I}\mathrm{m}_{I}(t)\), where
One projects Eq. (3) onto the four relevant eigenvectors, which leads to the following equations:
When \(s = 1\), the first two equations in Eq. (11) describe a drift–diffusion process in a two-dimensional subspace, while the coupled dynamics of the common and inhibitory modes are once again strongly damped. The DDM for three-alternative DM can therefore be written
Note that the dynamics of the two decision variables \(X_{1}\) and \(X_{2}\) are coupled through the correlation in their noise sources. The decision boundaries are set by noting that
Therefore, given that the neuronal threshold for a decision is defined as \(r_{th}\), we can set three decision boundaries: (1) population 1 wins if \(X_{2} = -X_{1}+r_{th}-\mathrm{M}_{C}\), (2) population 2 wins if \(X_{2} = X_{1}+r_{th}-\mathrm{M}_{C}\), and (3) population 3 wins if \(X_{2} = -(r_{th}-\mathrm{M}_{C})/2\). These three boundaries define a triangle in \((X_{1},X_{2})\)-space over which the drift–diffusion process takes place.
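The triangular geometry can be probed with a direct simulation. The sketch below integrates the three noisy input streams and removes the common mode at each step, which at \(s=1\) is equivalent to the two-dimensional drift–diffusion process; all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_3afc(I, sigma, r_th, M_C=0.0, tau=0.02, dt=1e-4,
                  t_max=5.0, rng=None):
    """Three-alternative DDM: integrate the three input streams,
    project out the common mode, and decide when any effective rate
    reaches the neuronal threshold r_th."""
    rng = np.random.default_rng() if rng is None else rng
    I = np.asarray(I, dtype=float)
    y = np.zeros(3)
    t = 0.0
    while t < t_max:
        y += (I * dt + sigma * np.sqrt(dt) * rng.standard_normal(3)) / tau
        t += dt
        r = y - y.mean() + M_C  # competition subspace plus common-mode offset
        w = int(np.argmax(r))
        if r[w] >= r_th:
            return w + 1, t  # population w+1 wins
    return 0, t

choice, rt = simulate_3afc(I=[0.6, 0.4, 0.4], sigma=0.3, r_th=1.0,
                           rng=np.random.default_rng(1))
```

Plotting \((X_{1},X_{2}) = ((y_{1}-y_{2})/2,\,(y_{1}+y_{2}-2y_{3})/6)\) over a trial traces the diffusion within the triangle described above.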
Derivation of DDMs for n-alternative DM
The structure of the linear rate equations Eq. (3) can be trivially extended to any number of competing populations. In order to derive the corresponding DDM one need only properly define the basis functions for the firing rates, as described above. The common and inhibitory modes always have the same structure. If the basis functions are \(\mathbf{e}_{i}\) with corresponding decision variables \(X_{i}\) for \(i = 1,\ldots,n-1\), then the firing rates are \(\mathbf{r} = \sum_{i=1}^{n-1}\mathbf{e}_{i}X_{i}(t)+\mathbf{e}_{C}\mathrm{m}_{C}(t)+\mathbf{e}_{I}\mathrm{m}_{I}(t)\) and it is easy to show that the dynamics for the kth decision variable is given by
as long as \(s = 1\). The decision boundaries are defined by setting the firing rates equal to their threshold value for a decision, i.e.
where \(\mathbf{u} = (1,1,\ldots,1)\).
The basis set proposed here for n-alternative DM is to take for the kth eigenvector \(\mathbf{e}_{k} = (1,\ldots,1,-k,0,\ldots,0)\),
where the element −k appears in the \((k+1)\)st spot; this is a generalization of the eigenvector basis taken earlier for two- and three-alternative DM. With this choice, the equation for the kth decision variable can be written
The firing rate for the ith neuronal population can then be expressed in terms of the decision variables as
which, given a fixed neuronal threshold \(r_{th}\), directly gives the bounds on the decision variables. Namely, the ith neuronal population wins when \(-(i-1)X_{i-1}+\sum_{l=i}^{n-1}X_{l}+\mathrm{M}_{C} > r_{th}\).
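The proposed basis is easy to construct and check programmatically; a minimal sketch:

```python
import numpy as np

def competition_basis(n):
    """Competition basis for n alternatives: the kth vector is
    (1, ..., 1, -k, 0, ..., 0), with k ones followed by -k in the
    (k+1)st spot, for k = 1, ..., n-1."""
    E = np.zeros((n - 1, n))
    for k in range(1, n):
        E[k - 1, :k] = 1.0
        E[k - 1, k] = -k
    return E

E = competition_basis(5)
G = E @ E.T           # Gram matrix: diagonal, since the vectors are orthogonal
row_sums = E.sum(axis=1)  # all zero: each vector lies in the competition subspace
```

The zero row sums reflect the fact noted in the text that every valid competition eigenvector has elements summing to zero, since all excitatory populations drive the inhibition equally.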
The n-alternative DDM reproduces the well-known Hick’s law [21], which postulates that the mean reaction time (RT) increases as the logarithm of the number of alternatives, for fixed accuracy; see Fig. 1.
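A rough numerical probe of this behavior is sketched below; thresholds, drifts, and the noise level are illustrative assumptions, and (unlike in Fig. 1) accuracy is not held fixed across n here.

```python
import numpy as np

def nafc_trial(n, I_hi, I_lo, sigma, theta, dt=1e-2, t_max=200.0, rng=None):
    """One n-alternative trial: stream 0 carries the stronger input;
    decide when the leading common-mode-subtracted stream exceeds theta.
    Returns (correct, reaction time)."""
    rng = np.random.default_rng() if rng is None else rng
    I = np.full(n, I_lo)
    I[0] = I_hi
    y = np.zeros(n)
    t = 0.0
    while t < t_max:
        y += I * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        t += dt
        x = y - y.mean()
        w = int(np.argmax(x))
        if x[w] >= theta:
            return w == 0, t
    return False, t

rng = np.random.default_rng(2)
for n in (2, 4, 8):
    trials = [nafc_trial(n, 0.3, 0.1, 0.5, 1.0, rng=rng) for _ in range(100)]
    acc = np.mean([c for c, _ in trials])
    mean_rt = np.mean([t for _, t in trials])
    print(f"n={n}: accuracy={acc:.2f}, mean RT={mean_rt:.2f}")
```

To reproduce the fixed-accuracy scaling of Hick's law one would additionally adjust θ with n so that accuracy stays constant, and then compare mean RT against \(\log n\).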
DDM for multiple alternatives as the continuous-time limit of the MSPRT
In the previous section I have illustrated how to derive DDMs starting from a set of linear differential equations describing an underlying integration process of several competing data streams. An alternative computational framework would be to apply a statistical test directly to the data, without any pretensions with regard to a neuronal implementation. In fact, the development of an optimal sequential test for multiple alternatives followed closely on the heels of Wald’s work for two alternatives in the 1940s [22]. Subsequent work proposed a Bayesian framework for a multiple sequential probability ratio test (MSPRT) [14,15,16]. Here I will show how such a Bayesian MSPRT is equivalent to a corresponding DDM in the continuum limit, albeit with moving thresholds.
I will follow the exposition from [16] in setting up the problem, although some notation differs for clarity. I assume that there are n possible alternatives, and that the instantaneous evidence for an alternative i is given by \(z_{i}(t)\), which is drawn from a Gaussian distribution with mean \(I_{i}\) and variance \(\sigma ^{2}\). The total observed input over all n alternatives up to a time t is written I, and the hypothesis that alternative i has the highest mean is \(H_{i}\). Then, using Bayes' theorem, the probability that alternative i has the highest mean given the total observed input is
Furthermore one has \(\Pr (I) = \sum_{k=1}^{n}\Pr (I\mid H_{k})\Pr (H_{k})\). Given equal priors on the different hypotheses, Eq. (20) can be simplified to
Finally, the log-likelihood of alternative i having the largest mean up to a time t is \(L_{i}(t) = \ln {\Pr (H_{i}\mid I)}\), which is given by
The choice of Gaussian distributions for the input leads to a simple form for the log-likelihoods. Specifically, the time rate of change of the log-likelihood for alternative i is given by
where \(y_{i}(t) = \int _{0}^{t}dt^{\prime }z_{i}(t^{\prime })\). Note that \(z_{i}(t) = I_{i}+\xi _{i}(t)\), where \(\xi _{i}\) is a Gaussian white noise process with zero mean and variance \(\sigma ^{2}\).
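The displayed expressions above are missing from this rendering. Under the stated Gaussian assumptions with equal priors, a standard reconstruction of the log-likelihood (the precise normalization is an assumption) is:

```latex
% Reconstructed Gaussian log-likelihood for alternative i (equal priors);
% the overall normalization is an assumption.
L_{i}(t) = \frac{1}{\sigma^{2}}\Bigl(I_{i}\,y_{i}(t) - \tfrac{1}{2}I_{i}^{2}t\Bigr)
  - \ln\sum_{k=1}^{n}\exp\Bigl[\frac{1}{\sigma^{2}}\Bigl(I_{k}\,y_{k}(t)
  - \tfrac{1}{2}I_{k}^{2}t\Bigr)\Bigr].
```

Differentiating in time gives a drift term proportional to \(I_{i}z_{i}(t)\) for each alternative plus a common normalizing term, consistent with the projection onto the competition and common modes described next.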
I can now write the \(L_{i}\) in terms of the orthogonal basis used in the previous section,
Formally projecting onto each mode in turn leads precisely to the DDMs of Eq. (18) (with \(\tau = 1\)) for the decision variables, while the dynamics for the common mode is
For Bayes-optimal behavior, alternative i should be chosen if the log-likelihood exceeds a given threshold, namely if
Note that the log-likelihood is always negative and hence does not represent a firing rate, as in the differential equations studied in the previous section. This does not pose a problem since we are simply implementing a statistical test. On the other hand, an important difference from the case studied previously is that, for the neuronal models, the common mode was stable and converged to a fixed point. Therefore, the decision variable dynamics was equivalent to the original rate dynamics with a shift of threshold. Here, that is not the case. The common mode represents the normalizing effect of the log-marginal probability of the stimulus, which always changes in time. Specifically, if we assume that \(I_{l}>I_{i}\) for all \(i\ne l\), namely that the mean of the distribution is greatest for alternative l, then the expected dynamics of the common mode at long times are
Therefore the DDMs are only equivalent to applying Bayes' theorem if the threshold is allowed to vary in time. In this case they are, in fact, mathematically identical and thus give the same accuracy and reaction time distributions, as shown in Fig. 2.
On the other hand, an equivalent DDM with a fixed threshold has worse accuracy and shorter reaction times. To make a fair comparison, I set the fixed threshold such that the mean reaction time is identical to that of the Bayesian model at zero coherence. Another important difference is that the error RTs are longer than the correct RTs for the Bayesian model, see Fig. 2B, an effect which is commonly seen in experiments (and is also reproduced by the nonlinear DDMs studied in the next section of the paper) [17]. For the fixed-threshold DDMs, on the other hand, correct and error RTs are always the same.
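A minimal sketch of the Bayesian MSPRT under the Gaussian assumptions above. The hypothesized high and low means are assumed known to the test, and all numerical values are illustrative.

```python
import numpy as np

def msprt(n, mu_hi, mu_lo, sigma, true_idx=0, p_th=0.99,
          dt=1e-2, t_max=500.0, rng=None):
    """Bayesian MSPRT: under hypothesis H_i, stream i has mean mu_hi and
    the rest mu_lo. With equal priors, terms common to all hypotheses
    cancel and the posterior reduces to a softmax of
    (mu_hi - mu_lo) * y_i / sigma**2; stop when it exceeds p_th."""
    rng = np.random.default_rng() if rng is None else rng
    means = np.full(n, mu_lo)
    means[true_idx] = mu_hi
    y = np.zeros(n)
    t = 0.0
    while t < t_max:
        y += means * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        t += dt
        logL = (mu_hi - mu_lo) * y / sigma**2
        logL -= logL.max()                      # for numerical stability
        post = np.exp(logL) / np.exp(logL).sum()
        if post.max() >= p_th:
            return int(np.argmax(post)) + 1, t
    return 0, t

choice, rt = msprt(n=3, mu_hi=0.6, mu_lo=0.4, sigma=0.5,
                   rng=np.random.default_rng(4))
```

Because the stopping rule acts on the normalized posterior rather than the raw accumulated evidence, the implied threshold on the decision variables moves in time, as discussed above.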
Derivation of a reduced model for two-choice DM for a nonlinear system
A more realistic firing rate model for a decision-making circuit allows for a nonlinear input-output relationship in neuronal activity. For two-alternative DM the equations are
The nonlinear transfer function ϕ (\(\phi _{I}\)) does not need to be specified in the derivation. The noise sources \(\xi _{i}\) are taken to be Gaussian white noise and hence must sit outside of the transfer function; they therefore directly model fluctuations in the population firing rate rather than input fluctuations. Input fluctuations can be modeled by allowing for a non-white noise process and including it directly as an additional term in the argument of the transfer function. Note that here I assume the nonlinearity is a smooth function. This is a reasonable assumption for a noisy system such as a neuron or neuronal circuit. Non-smooth systems, such as piecewise-linear equations for DM, require a distinct analytical approach; see, e.g., [23].
The details of the derivation for two alternatives can be found in [17], but here I give a flavor of how one proceeds; the process will be similar when there are three or more choices, although the scaling of the perturbative expansion is different. One begins by ignoring the noise sources and linearizing Eq. (28) about a fixed-point value for which the competing populations have the same level of activity, and hence also \(I_{1} = I_{2}\). Specifically one takes \((r_{1},r_{2},r_{I}) = (R,R,R_{I})+(\delta r_{1},\delta r_{2},\delta r_{I})e^{\lambda t}\), where \(\delta r\ll 1\). In vector form this can be written \(\mathbf{r} = \mathbf{R}+\boldsymbol{\delta}\mathbf{r}e^{\lambda t}\). Plugging this ansatz into Eq. (28) and keeping only terms linear in the perturbations leads to the following system of linear equations:
where
and the slope of the transfer function \(\phi ^{\prime }\) is calculated at the fixed point. Note that the matrix Eq. (30) has a very similar structure to the linear operator in Eq. (3). This system of equations only has a solution if the determinant of the matrix is equal to zero; this yields the characteristic equation for the eigenvalues λ. These eigenvalues are
Note that \(\lambda _{1} = 0\) if \(1-s\phi ^{\prime } = 0\), while the real part of the other two eigenvalues is always negative. This indicates that there is an instability of the fixed point in which the activity of the neuronal populations is the same, and that the direction of this instability can be found by setting \(1-s\phi ^{\prime } = 0\) and \(\lambda = 0\) in Eq. (30). This yields a homogeneous linear system for \(\boldsymbol{\delta}\mathbf{r}\), where
the solution of which can clearly be written \(\boldsymbol{\delta}\mathbf{r} = (1,-1,0)\). This is the same competition mode as found earlier for the linear system.
A brief overview of the normal-form derivation
At this point it is still unclear how one can leverage this linear analysis to derive a DDM. Specifically, and unlike in the linear case, one cannot simply rotate the system to uncouple the competition dynamics from the noncompetitive modes. Also, note that the steady states in a nonlinear system depend on the external inputs, whereas that is not the case in a linear system. In particular, the DDM has a drift term μ which ought to be proportional to the difference in inputs \(I_{1}-I_{2}\), whereas to perform the linear stability analysis we assumed \(I_{1} = I_{2}\). Indeed, if one assumes that the inputs are different, then the fixed-point structure is completely different. The solution is to assume that the inputs are only slightly different, and formalize this by introducing the small parameter ϵ. Specifically, we write \(I_{1} = I_{0}+\epsilon ^{2}\Delta I +\epsilon ^{3}\bar{I}_{1}\) and \(I_{2} = I_{0}+\epsilon ^{2}\Delta I +\epsilon ^{3}\bar{I}_{2}\). In this expansion, \(I_{0}\) is the value of the external input which places the system right at the bifurcation in the zero-coherence case. In order to describe the dynamics away from the bifurcation we also allow the external inputs to vary. Specifically, ΔI represents the component of the change in input which is common to both populations (overall increased or decreased drive compared to the bifurcation point), while \(\bar{I}_{i}\) is a change to the drive to population i alone, and hence captures changes in the coherence of the stimulus. The particular scaling of these terms with ϵ is enforced by the solvability conditions which appear at each order. That is, the mathematics dictates what these terms are; if one chose a more general scaling one would find that only these terms remain.
The firing rates are then also expanded in orders of ϵ and written
where \(\mathbf{r_{0}}\) are the fixed-point values, \(\mathbf{e_{1}}\) is the eigenvector corresponding to the zero eigenvalue and X is the decision variable, which evolves on a slow time scale, \(T = \epsilon ^{2}t\). The slow time scale arises from the fact that there is an eigenvector with zero eigenvalue; when we change the parameter values slightly, proportional to ϵ, the growth rate of the dynamics along that eigenvector is no longer zero, but still very small, in fact proportional to \(\epsilon ^{2}\) in this case.
The method for deriving the normal-form equation, i.e. the evolution equation for X, involves expanding Eq. (28) in ϵ. At each order in ϵ there is a set of equations to be solved; at some orders, in this case first at order \(\mathcal{O}(\epsilon ^{3})\), the equations cannot be solved and a solvability condition must be satisfied, which leads to the normal-form equation.
The normal-form equation for two choices
Following the methodology described in the preceding section leads to the evolution equation for the decision variable X,
where for the case of Eq. (28), \(\eta = \phi ^{\prime }/2\), \(\xi (t) = (\xi _{1}(t)-\xi _{2}(t))/2\) and the coefficients μ and γ are
see [17] for a detailed calculation. Equation (35) provides excellent fits to performance and reaction times for monkeys and human subjects; see Fig. 3 from [17].
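Equations (35) and (36) are missing from this rendering. A form of Eq. (35) consistent with the surrounding description (a drift proportional to the input difference, a linear term in \(\Delta I\), a cubic term, and noise with \(\eta = \phi ^{\prime }/2\)) is sketched here; the coefficient α is a generic placeholder, not taken from the paper.

```latex
% Sketch of the two-choice normal form (Eq. (35)); the coefficient
% alpha of the Delta-I term is a generic placeholder.
\partial_{T}X = \mu + \alpha\,\Delta I\,X + \gamma X^{3} + \eta\,\xi(t),
\qquad \mu \propto \bar{I}_{1} - \bar{I}_{2}.
```

Only the first term on the right-hand side breaks the reflection symmetry \(X \to -X\), consistent with the discussion below Eq. (37).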
It is important to note that the form of Eq. (35) only depends on there being a twoway competition, not on the exact form of the original system. As an example, consider another set of firing rate equations
where, rather than modeling the inhibition explicitly, an effective inhibitory interaction between the two populations is assumed. In this case the resulting normal-form equation is still Eq. (35). In fact, performing a linear stability analysis on Eq. (37) yields a null eigenvector \(\mathbf{e}_{1} = (1,-1)\). This indicates that the instability causes one population to grow at the expense of the other, in a symmetric fashion, as before. This is the key point which leads to the normal-form equation. More specifically we see that for both systems \(r_{1} = R+X\) while \(r_{2} = R-X\), which means that if we flip the sign of the decision variable X and switch the labels on the neuronal populations, the dynamics is once again the same. This reflection symmetry ensures that all terms in X will have odd powers in Eq. (35) [17]. It is broken only when the inputs to the two populations are different, i.e. by the first term on the r.h.s. of Eq. (35).
As we shall see, the stochastic normal-form equation, Eq. (35), which from now on I will refer to as a nonlinear DDM, has a very different form from the nonlinear DDMs for \(n>2\). The reason is, again, the reflection symmetry in the competition subspace for \(n=2\), which is not present for \(n>2\). Therefore, for \(n>2\) the leading-order nonlinearity is quadratic and, in fact, a much simpler function of the original neuronal parameters.
Three-alternative forced-choice decision making
The derivation of the normal form, and the corresponding DDM, for three-choice DM differs from that for two-alternative DM in several technical details; these differences continue to hold for n-alternative DM for all \(n\ge 3\). Therefore I will go through the derivation in some detail here and will then extend it straightforwardly to the other cases.
Again I will make use of a particular system of firing rate equations to illustrate the derivation. I take a simple extension of the firing rate equations for two-alternative DM, Eq. (28). The equations are
I first ignore the noise terms and consider the linear stability of perturbations of the state in which all three populations have the same level of activity (and so \(I_{1} = I_{2} = I_{3} = I_{0}\)), i.e. \(\mathbf{r} = \mathbf{R}+\boldsymbol{\delta}\mathbf{r}e^{\lambda t}\), where \(\mathbf{R} = (R,R,R,R_{I})\). This once again leads to a set of linear equations. The fourth-order characteristic equation leads to precisely the same eigenvalues as in the two-choice case, Eqs. (31) and (32), with the notable difference that the first eigenvalue has multiplicity two. This means that if \(1-s\phi ^{\prime } = 0\) then there will be two eigenvalues identically equal to zero and two stable eigenvalues. This is the first indication that the integration process underlying the DM process for three choices will be two-dimensional. The eigenvectors for the DM process are found by solving the corresponding homogeneous system, where
There are many possible solutions; a simple choice would be \(\mathbf{e}_{1} = (1,-1,0,0)\) and \(\mathbf{e}_{2} = (1,1,-2,0)\), and so \(\mathbf{e}_{1}^{T}\cdot \mathbf{e}_{2} = 0\). Note that in this linear subspace any valid choice of eigenvector will have the property that the sum of all its elements equals zero; this will be true whatever the dimensionality of the DM process and reflects the fact that all of the excitatory populations excite the inhibitory interneurons in equal measure.
To derive the normal form I once again assume that the external inputs to the three populations differ by a small amount, namely \((I_{1},I_{2},I_{3}) = (I_{0},I_{0},I_{0})+\epsilon ^{2}(\bar{I}_{1},\bar{I}_{2},\bar{I}_{3})\), and then expand the firing rates as \(\mathbf{r} = \mathbf{R}+\epsilon (\mathbf{e}_{1}X_{1}(T)+\mathbf{e}_{2}X_{2}(T))+\mathcal{O}(\epsilon ^{2})\), where the slow time is \(T = \epsilon t\). Note that the inputs are only expanded to second order in ϵ, as opposed to third order as in the previous section. The reason is that the solvability condition leading to the normal-form equation for the two-alternative case arises at third order. This is due to the fact that the bifurcation has a reflection symmetry, i.e. it is a pitchfork bifurcation, and so only odd terms in the decision variable are allowed. The lowest-order nonlinear term is therefore the cubic one. On the other hand, for more than two alternatives there is no such reflection symmetry in the corresponding bifurcation to winner-take-all behavior. Therefore the lowest-order nonlinear term is quadratic, as in a saddle-node bifurcation.
I expand Eq. (38) in orders of ϵ. In this case a solvability condition first arises at order \(\epsilon ^{2}\), which also accounts for the different scaling of the slow time compared to two-choice DM. Note that there are two solvability conditions, corresponding to eliminating the projection of terms at that order onto both of the left-null eigenvectors of the linear operator. As before, the left-null eigenvectors are identical to the right-null eigenvectors. Applying the solvability conditions yields the normal-form equations
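Eq. (40) is missing from this rendering. A quadratic normal form with the threefold symmetry of the competition, written with a single coefficient β, would read as follows; β and the sign conventions are assumptions, not taken from the paper.

```latex
% Sketch of the three-choice normal form (Eq. (40)); beta and the
% sign conventions are assumptions.
\partial_{T}X_{1} = \mu_{1} + 2\beta X_{1}X_{2} + \tilde{\xi}_{1}(T),
\partial_{T}X_{2} = \mu_{2} + \beta\bigl(X_{1}^{2} - X_{2}^{2}\bigr)
                    + \tilde{\xi}_{2}(T),
```

where the \(\tilde{\xi}_{i}\) are correlated noise projections. Note that this drift is the gradient of a potential, \(V = -\mu_{1}X_{1} - \mu_{2}X_{2} - \beta (X_{1}^{2}X_{2} - X_{2}^{3}/3)\), anticipating the potential description of the dynamics discussed below.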
The nonlinear DDM Eq. (40) provides an asymptotically correct description of the full dynamics of Eq. (38) in the vicinity of the bifurcation leading to the decision-making behavior. Figure 3(A) shows a comparison of the firing rate dynamics with the nonlinear DDM right at the bifurcation. The appropriate combinations of the two decision variables \(X_{1}\) and \(X_{2}\) clearly track the rate dynamics accurately, including the correct choice (here population 2) and reaction time. The nonlinear drift–diffusion process evolves in a triangular section of the plane; see Fig. 3(B).
A note on the difference between the nonlinear DDMs for 2A and 3A DM
The dynamics of the nonlinear DDM for 2A, Eq. (35), depends strongly on the sign of the cubic coefficient γ. Specifically, when \(\gamma < 0\) the bifurcation is supercritical, while for \(\gamma > 0\) it is subcritical, indicating the existence of a region of multistability for \(\Delta I < 0\). Indeed, in experiment, cells in parietal cortex which exhibit ramping activity during perceptual DM tasks also readily show delay activity in anticipation of the saccade to their response field [24]; one possible mechanism for this would be precisely this type of multistability. When \(\Delta I = 0\), i.e. when the system sits squarely at the bifurcation, Eq. (35) is identical to its linear counterpart with the sole exception of the cubic term. For \(\gamma < 0\) the state \(X = 0\) is stabilized. In fact, the dynamics of the decision variable can be viewed as the motion of a particle in a potential, which for \(\gamma < 0\) increases rapidly as X grows, pushing the particle back. For \(\gamma > 0\), on the other hand, the potential accelerates the motion of X, pushing it off to ±∞. This is very similar to the potential for the linear DDM with absorbing boundaries. Therefore, the nonlinear DDM for two alternatives is qualitatively similar to the linear DDM when it is subcritical, and hence when the original neuronal system is multistable.
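Writing the 2A normal form schematically as \(\dot{X} = \mu + \Delta I\,X + \gamma X^{3} + \xi(T)\), where μ stands for the input bias (the precise coefficients are those of Eq. (35)), the particle-in-a-potential picture reads:

```latex
\frac{dX}{dT} = -\frac{\partial U}{\partial X} + \xi(T),
\qquad
U(X) = -\mu X - \frac{\Delta I}{2}X^{2} - \frac{\gamma}{4}X^{4} .
```

For \(\gamma < 0\) the quartic term makes \(U\) grow like \(|X|^{4}\), confining the particle near the origin; for \(\gamma > 0\) it tips the potential over, accelerating \(X\) toward ±∞.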
On the other hand, the nonlinear DDM for three alternatives, Eq. (40), has a much simpler, quadratic nonlinearity. The consequence of this is that there are no stable fixed points and the decision variables always evolve to ±∞. Furthermore, to leading order there is no dependence on the mean input, indicating that the dynamics is dominated by the behavior right at the bifurcation (see Note 1). The upshot is that Eq. (40) is as similar to the corresponding linear DDM with absorbing boundaries as a nonlinear system can be without fine-tuning. This remains true for all \(n>2\).
This also means that neuronal systems with inhibition-mediated winner-take-all dynamics are generically multistable for \(n>2\), although for \(n = 2\) they need not be. This is due to the reflection symmetry present only for \(n = 2\).
n-alternative forced-choice decision making
One can now extend the analysis for three-alternative DM to the more general n-choice case. Again I start with a set of firing rate equations
A linear stability analysis shows that the eigenvalues of perturbations of the state \(\mathbf{r} = (R,R,\ldots ,R,R_{I})\) are given by Eqs. (31) and (32), where the first eigenvalue has multiplicity \(n-1\). Therefore the decision-making dynamics evolves on a manifold of dimension \(n-1\). The linear subspace associated with this manifold is spanned by \(n-1\) eigenvectors which are mutually orthogonal and the elements of which sum to zero. For n alternatives, we take the kth eigenvector to be of the form \(\mathbf{e}_{k} = (1,1,\dots ,1,-k,0,\dots ,0)\), i.e. k ones followed by −k in the (k+1)st spot. Therefore one can write
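This basis is easy to construct and check programmatically. The sketch below builds the \(n-1\) competition modes just described and verifies the two properties the derivation relies on: each mode sums to zero, and the modes are mutually orthogonal:

```python
import numpy as np

def competition_basis(n):
    """Build the n-1 competition-mode eigenvectors for n alternatives:
    the kth vector has k ones, then -k, then zeros (length n)."""
    basis = []
    for k in range(1, n):
        e = np.zeros(n)
        e[:k] = 1.0
        e[k] = -k
        basis.append(e)
    return np.array(basis)

E = competition_basis(5)
# each mode's elements sum to zero ...
assert np.allclose(E.sum(axis=1), 0.0)
# ... and distinct modes are mutually orthogonal
G = E @ E.T
assert np.allclose(G - np.diag(np.diag(G)), 0.0)
```

Orthogonality follows because, for \(j < k\), the kth vector is constant (equal to one) over the support of the jth vector, whose elements sum to zero.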
Following the same procedure as in the case of three-choice DM and applying the \(n-1\) solvability conditions at order \(\mathcal{O}(\epsilon^{2})\), one arrives at the following normal-form equation for the kth decision variable for n alternatives:
It is illuminating to write this formula out explicitly for some of the decision variables:
where I have omitted the noise terms for simplicity.
Surprisingly, the \(n-1\) equations for the decision variables in n-alternative DM can all be derived from a single, multivariate function:
For the system of firing rate equations studied here the parameters are \(a = \phi^{\prime}\) and \(b = s^{2}\phi^{\prime\prime}\). Then the equation for the kth decision variable is simply
The dynamics of the function f is given by
where I have ignored the effect of noise. Therefore the dynamics of the decision variables can be thought of as the motion of a particle on a potential landscape, given by f. Noise sources lead to a diffusion along this landscape.
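If the kth decision variable obeys \(\dot{X}_{k} = \partial f/\partial X_{k}\), the chain rule gives \(df/dT = \sum_{k}(\partial f/\partial X_{k})^{2} \geq 0\) in the absence of noise: the particle always moves "uphill" on f. The short check below verifies this identity numerically for an illustrative stand-in potential (the coefficients and the cubic form are assumptions for the demo, not the derived Eq. (44)):

```python
import numpy as np

# Illustrative stand-in for the potential f of Eq. (44); a, b, dI are
# placeholder values, not the derived neuronal parameters.
a, b, dI = 1.0, 0.5, np.array([0.1, -0.2])

def f(X):
    return a * dI @ X + (b / 3.0) * np.sum(X**3)

def grad_f(X):
    return a * dI + b * X**2

# Along the noise-free flow dX/dT = grad f, df/dT = |grad f|^2 >= 0.
rng = np.random.default_rng(1)
h = 1e-6
for _ in range(5):
    X = rng.standard_normal(2)
    g = grad_f(X)
    df_dt = (f(X + h * g) - f(X)) / h   # directional derivative along the flow
    assert abs(df_dt - g @ g) < 1e-3    # matches |grad f|^2, hence >= 0
```

Noise then turns this deterministic uphill motion into a biased diffusion on the same landscape.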
Discussion
In this paper I have illustrated how to derive drift–diffusion models starting from models of neuronal competition for n-alternative decision-making tasks. In the case of linear systems, the derivation consists of nothing more than a rotation of the dynamics onto a subspace of competition modes. This idea is not new, e.g. [13], although I have made the derivation explicit here, and have chosen as a point of departure a model in which inhibition is explicitly included as a dynamical variable. It turns out that a Bayesian implementation of a multiple sequential probability ratio test is also equivalent to a DDM in the continuum limit, albeit with time-varying thresholds.
For nonlinear systems, the corresponding DDM is a stochastic normal form, which is obtained here using the method of multiple scales [25]. The nonlinear DDM was obtained earlier for the special case of two-alternative DM [17]. For four-alternative DM the nonlinear DDM was obtained with a different set of competitive basis functions [26] to describe performance and reaction time in experiments with human subjects. This led to a different set of coupled normal-form equations from those given by Eq. (42), although the resulting dynamics is, of course, the same. The advantage of the basis functions chosen in this paper is that they generalize easily to any n, leading to a simple, closed-form expression for the nonlinear DDM for an arbitrary number of alternatives, Eq. (42).
An alternative approach to describing the behavior in DM tasks is to develop a statistical or probabilistic description of evidence accumulation; see, e.g., [1, 13, 15, 22, 27]. Such an approach also often leads to a drift–diffusion process in some limit, as is the case for the Bayesian MSPRT studied here; see also [28]. In fact, recent work has shown that an optimal policy for multiple-alternative decision making can be approximately implemented by an accumulation process with time-varying thresholds, similar to the Bayesian model studied in this manuscript [29]. From a neuroscience perspective, however, it is of interest to pin down how the dynamics of neuronal circuits might give rise to animal behavior which is well described by a drift–diffusion process. This necessitates the analysis of neuronal models at some level. What I have shown here is that the dynamics of a network of n neuronal populations which compete via a global pool of inhibitory interneurons can in general be formally reduced to a nonlinear DDM of dimension \(n-1\). The nonlinear DDMs differ from the linear DDMs through the presence of quadratic (or, for \(n = 2\), cubic) nonlinearities which accelerate the winner-take-all competition. In practical terms this nonlinear acceleration serves the same role as the hard threshold in the linear DDMs, and so the two classes of DDMs behave quite similarly.
The DDM is one of the most widely used models for fitting data from two-alternative forced-choice decision-making experiments. In fact it provides fits to decision accuracy and reaction time in a wide array of tasks, e.g. [2, 3, 30]. Here I have illustrated how the DDM can be extended straightforwardly to n alternatives. It remains to be seen whether such DDMs will fit accuracy and reaction times as well as their two-alternative cousin, although the results of [12] for three alternatives are promising. Note also that the form of the nonlinear DDMs, Eqs. (44) and (45), does not depend on the details of the original neuronal equations; this is what is meant by a normal-form equation. The only assumption needed for the validity of the normal-form equations is that there be global, nonlinear competition between n populations. Of course, if the normal form is derived from a given neuronal model, then the parameters a and b of the nonlinear potential Eq. (44) will depend on the original neuronal parameters.
As stated earlier, the nonlinear DDMs can have dynamics quite similar to the standard, linear DDM with hard thresholds. Nonetheless, there are important qualitative differences between the two classes of models. First of all, correct and error reaction-time distributions are identical in the linear DDMs given unbiased initial conditions, whereas the nonlinear DDMs generically show longer error reaction times [17], also a feature of the Bayesian MSPRT. Because error reaction times in experiment indeed tend to be longer than correct ones, the linear DDM cannot be directly fit to data. Rather, variability in the drift rate across trials must be assumed in order to account for the differences between error and correct reaction times; see, e.g., [30]. Secondly, nonlinear DDMs exhibit intrinsic dynamics which reflect the winner-take-all nature of neuronal models with strong recurrent connectivity. As a consequence, as the decision variables increase (or decrease) from their initial state, they undergo an acceleration which does not explicitly depend on the value of the external input. This means that the response of the system to fluctuations in the input is not the same late in a trial as it is early on; specifically, later fluctuations have less impact. Precisely this effect has been seen in the response of neurons in parietal area LIP in monkeys in two-alternative forced-choice decision-making experiments; see Fig. 10B in [31]. Given that network models of the neuronal activity driving decision-making behavior lead to nonlinear DDMs, fitting such models to experimental data in principle allows one to link behavioral measures to the underlying neuronal parameters. In fact, it may be that the linear DDM has been so successful in fitting behavioral data over the years precisely because it is a close approximation to the true nonlinear DDM which arises in neuronal circuits with winner-take-all dynamics.
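The symmetry of correct and error RTs in the linear DDM is easy to check numerically. The sketch below (all parameter values are arbitrary illustrations) simulates first passages of \(dX = \mu\,dt + \sigma\,dW\) from an unbiased start between absorbing boundaries at ±θ; the mean RTs conditioned on hitting either boundary coincide up to sampling error:

```python
import numpy as np

def linear_ddm_trials(mu=0.5, sigma=1.0, theta=1.0, dt=1e-3,
                      n_trials=2000, t_max=20.0, rng=0):
    """First-passage simulation of the linear DDM dX = mu dt + sigma dW,
    unbiased start X = 0, absorbing boundaries at +/- theta.
    Returns (reaction times, hit-upper-boundary flags)."""
    rng = np.random.default_rng(rng)
    X = np.zeros(n_trials)
    rt = np.full(n_trials, np.nan)
    hit_upper = np.zeros(n_trials, dtype=bool)
    alive = np.ones(n_trials, dtype=bool)
    for step in range(1, int(t_max / dt) + 1):
        # Euler-Maruyama update for trials still in play
        X[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (np.abs(X) >= theta)
        rt[crossed] = step * dt
        hit_upper[crossed] = X[crossed] >= theta
        alive &= ~crossed
        if not alive.any():
            break
    done = ~np.isnan(rt)
    return rt[done], hit_upper[done]

rts, correct = linear_ddm_trials()
mean_correct = rts[correct].mean()
mean_error = rts[~correct].mean()
# with an unbiased start, the two conditional means agree (up to noise)
```

A nonlinear DDM simulated the same way would instead show systematically longer error RTs, which is the qualitative signature discussed above.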
Notes
 1.
Changing the mean input leads to higher-order corrections which can be calculated straightforwardly; see the Appendix.
Abbreviations
 DDM:
Drift–diffusion model
 DM:
Decision making
References
 1.
Wald A. Sequential tests of statistical hypotheses. Ann Math Stat. 1945;16:117–86.
 2.
Ratcliff R. A theory of memory retrieval. Psychol Rev. 1978;85:59–108.
 3.
Ratcliff R, McKoon G. The diffusion decision model: theory and data for twochoice decision tasks. Neural Comput. 2008;20:873–922.
 4.
Shadlen MN, Kiani R. Decision making as a window on cognition. Neuron. 2013;80:791–806.
 5.
Kira S, Yang T, Shadlen MN. A neural implementation of Wald’s sequential probability test. Neuron. 2015;84:861–73.
 6.
Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci. 2002;22(21):9475–89.
 7.
Latimer KW, Yates JL, Meister MLR, Huk AC, Pillow JW. Singletrial spike trains in parietal cortex reveal discrete steps during decision making. Science. 2015;349:184–7.
 8.
Wang XJ. Pacemaker neurons for the theta rhythm and their synchronization in the septohippocampal reciprocal loop. J Neurophysiol. 2002;87:889–900.
 9.
Wong KF, Huk AC, Shadlen MN, Wang XJ. Neural circuit dynamics underlying accumulation of timevarying evidence during perceptual decisionmaking. Front Comput Neurosci. 2007:neuro.10.006.2007.
 10.
Churchland AK, Kiani R, Shadlen MN. Decisionmaking with multiple alternatives. Nat Neurosci. 2008;11:693–702.
 11.
Bogacz R, Brown E, Moehlis J, Holmes P, Cohen JD. The physics of optimal decision making: a formal analysis of models of performance in twoalternative forcedchoice tasks. Psychol Rev. 2006;113(4):700–65.
 12.
Niwa M, Ditterich J. Perceptual decisions between multiple directions of visual motion. J Neurosci. 2008;28:4435–45.
 13.
McMillen T, Holmes P. The dynamics of choice among multiple alternatives. J Math Psychol. 2006;50:30–57.
 14.
Baum CW, Veeravalli VV. A sequential procedure for multihypothesis testing. IEEE Trans Inf Theory. 1994;40:1996–2007.
 15.
Dragalin VP, Tartakovsky AG, Veeravalli VV. Multihypothesis sequential probability ratio tests—part I: asymptotic optimality. IEEE Trans Inf Theory. 1999;45:2448–61.
 16.
Bogacz R, Gurney K. The basal ganglia and cortex implement optimal decision making between alternative options. Neural Comput. 2007;19:442–77.
 17.
Roxin A, Ledberg A. Neurobiological models of two-choice decision making can be reduced to a one-dimensional nonlinear diffusion equation. PLoS Comput Biol. 2008;4:e1000046.
 18.
Usher M, McClelland JL. The time course of perceptual choice: the leaky, competing accumulator model. Psychol Rev. 2001;108:550–92.
 19.
Koulakov AA, Raghavachari S, Kepecs A, Lisman JE. Model for a robust neural integrator. Nat Neurosci. 2002;5:775–82.
 20.
Goldman MS, Levine JH, Major G, Tank DW, Seung HS. Robust persistent neural activity in a model integrator with multiple hysteretic dendrites per neuron. Cereb Cortex. 2003;13:1185–95.
 21.
Hick WE. On the rate of gain of information. Q J Exp Psychol. 1952;4:11–26.
 22.
Armitage P. Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis. J R Stat Soc, Ser B. 1950;12:137–44.
 23.
Bogacz R, Usher M, Zhang J, McClelland JL. Extending a biologically inspired model of choice: multialternatives, nonlinearity and valuebased multidimensional choice. Philos Trans R Soc Lond B, Biol Sci. 2007;362:1655–70.
 24.
Shadlen MN, Newsome WT. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol. 2001;86:1916–36.
 25.
Wiggins S. Introduction to applied nonlinear dynamical systems and chaos. Berlin: Springer; 2003.
 26.
Ueltzhoeffer K, Armbruster-Genç DJN, Fiebach CJ. Stochastic dynamics underlying cognitive stability and flexibility. PLoS Comput Biol. 2015;11:e1004331.
 27.
Nguyen KP, Josić K, Kilpatrick ZP. Optimizing sequential decisions in the drift–diffusion model. J Math Psychol. 2019;88:32–47.
 28.
Radillo AE, VelizCuba A, Josić K, Kilpatrick ZP. Evidence accumulation and change rate inference in dynamic environments. Neural Comput. 2017;29:1561–610.
 29.
Tajima S, Drugowitsch J, Patel N, Pouget A. Optimal policy for multi-alternative decisions. bioRxiv. 2019:595843.
 30.
Ratcliff R, van Zandt T, McKoon G. Connectionist and diffusion models of reaction time. Psychol Rev. 1999;106:261–300.
 31.
Huk AC, Shadlen MN. Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making. J Neurosci. 2005;25(45):10420–36.
Acknowledgements
I acknowledge helpful conversations with Klaus Wimmer and valuable suggestions for improvements from the reviewers.
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Funding
Grant number MTM2015-71509-C2-1-R from the Spanish Ministry of Economy and Competitiveness. Grant 2014 SGR 1265 4662 for the Emergent Group “Network Dynamics” from the Generalitat de Catalunya. This work was partially supported by the CERCA program of the Generalitat de Catalunya.
Author information
Contributions
All authors read and approved the final manuscript.
Corresponding author
Correspondence to Alex Roxin.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Competing interests
The author declares to have no competing interests.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
1.1 Derivation of the normal form for three-alternative DM
I assume the inputs to the three competing neuronal populations in Eq. (38) are slightly different, namely \(I_{i} = I_{0}+\epsilon^{2}\bar{I}_{i}\) for \(i = \{1,2,3\}\), which defines the small parameter \(\epsilon \ll 1\). The firing rates are expanded as \(\mathbf{r} = \mathbf{R}+\epsilon \mathbf{r}_{1}+\epsilon^{2}\mathbf{r}_{2}+\mathcal{O}(\epsilon^{3})\), where R are the values of the rates at the fixed point. Finally, we define a slow time \(T = \epsilon t\). Plugging the expansions for the inputs and rates into Eq. (38) and expanding leads to the following system of equations:
where, to place the system at the bifurcation to winner-take-all behavior, I choose \(1-s\phi^{\prime} = 0\). Then we have the linear operator
and
Now one solves the equations order by order. At \(\boldsymbol{\mathcal{O}(\epsilon )}\)
The solution can be written \(\mathbf{r}_{1} = \mathbf{e}_{1}X_{1}(T)+\mathbf{e}_{2}X_{2}(T)\), where the right-null eigenvectors of the linear operator are \(\mathbf{e}_{1} = (1,-1,0,0)\) and \(\mathbf{e}_{2} = (1,1,-2,0)\). The left-null eigenvectors are identical to the right-null eigenvectors in this case. Up to \(\boldsymbol{\mathcal{O}(\epsilon^{2})}\):
Given the solution \(r_{1}\), the forcing term has the form
This forcing term contains a projection onto the left-null eigenspace of the linear operator. This projection cannot give rise to a solution for \(\mathbf{r}_{2}\) and so must be eliminated. The solvability conditions are therefore
which yield Eq. (40). Note that the resulting normal form has detuning or bias terms proportional to differences in inputs (these would be called ‘drifts’ in the context of a DDM), but no term linearly proportional to the normal-form amplitudes, or ‘decision variables’, as appears in the normal form for two-choice DM. Such a term describes the growth rate of the critical mode away from the bifurcation point: attracting on one side and repelling on the other. To derive these terms we must continue the calculation to the next order. First, we solve for \(\mathbf{r}_{2}\). While \(\mathbf{r}_{2}\) cannot have components in the directions \(\mathbf{e}_{1}\) or \(\mathbf{e}_{2}\), the components in the directions \(\mathbf{e}_{C} = (1,1,1,0)\) and \(\mathbf{e}_{I} = (0,0,0,1)\) (the common and inhibitory modes discussed in the section on DDMs) can be solved for.
Therefore we take \(\mathbf{r}_{2} = \mathbf{e}_{C}R_{2C}+\mathbf{e} _{I}R_{2I}\) and project
which yields
Up to \(\boldsymbol{\mathcal{O}(\epsilon ^{3})}\):
where
Applying the solvability conditions gives the equations
If we add the resulting terms to the normal-form equations derived at \(\mathcal{O}(\epsilon^{2})\) we have
1.1.1 Normal form for three-choice DM starting with a different neuronal model
If we consider a different neuronal model for three-choice DM we can still arrive at the same normal-form equations. The only important requirement is the presence of a linear subspace, associated with a zero eigenvalue of multiplicity two, which is spanned by eigenvectors representing competition modes. As an example, consider the model
in which the inhibition is not modeled explicitly as before. A linear stability analysis reveals that there is a zero eigenvalue with multiplicity two when \(\frac{c\phi^{\prime}}{2} = 1-s\phi^{\prime}\) and \(1-s\phi^{\prime} > 0\). Therefore, the instability mechanism here is the cross-inhibition, whereas before it was the self-coupling. Nonetheless, it is only the structure of the corresponding eigenvectors that matters. In this case the two eigenvectors with zero eigenvalue are \(\mathbf{e}_{1} = (1,-1,0)\) and \(\mathbf{e}_{2} = (1,1,-2)\). Following the methodology described above leads to the identical normal-form equations Eq. (40) with s replaced by \(s+c/2\).
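This linear stability result can be checked numerically. In the sketch below I assume the cross-inhibition enters with strength \(c/2\) per connection (an assumed parameterization, chosen so that the stated condition \(c\phi'/2 = 1-s\phi'\) places the zero-sum modes exactly at zero eigenvalue); the specific parameter values are illustrative:

```python
import numpy as np

# Linearization -I + phi' * W of the cross-inhibition model, with
# self-coupling s on the diagonal and -c/2 off the diagonal (assumed).
phip = 1.0                      # phi' at the fixed point, illustrative
s = 0.5                         # self-coupling, so that 1 - s*phip > 0
c = 2 * (1 - s * phip) / phip   # places the system at the bifurcation

W = np.full((3, 3), -c / 2) + np.diag([s + c / 2] * 3)
M = -np.eye(3) + phip * W       # linearized dynamics about the fixed point

# the two competition modes are null vectors of M ...
e1 = np.array([1.0, -1.0, 0.0])
e2 = np.array([1.0, 1.0, -2.0])
assert np.allclose(M @ e1, 0.0)
assert np.allclose(M @ e2, 0.0)
# ... while the common mode (1,1,1) decays, so only the
# zero-sum competition subspace is marginal
ec = np.ones(3)
lam_common = (M @ ec)[0] / ec[0]
assert lam_common < 0.0
```

The same check with the diagonal term detuned slightly above criticality shows the competition modes going unstable while the common mode still decays, which is the winner-take-all instability used throughout.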
1.2 Normal-form equations for n-choice DM
A neuronal model for n-choice DM \((n>3)\) is
I again consider small differences in the inputs, \(I_{i} = I_{0}+\epsilon^{2}\Delta I_{i}\) for \(i = 1,\ldots,n\), expand the rates as \(\mathbf{r} = \mathbf{R}+\epsilon \mathbf{r}_{1}+\mathcal{O}(\epsilon^{2})\), where R are the rates at the fixed point, and define the slow time \(T = \epsilon t\). If we take \(1-s\phi^{\prime} = 0\) then the matrix of the linearized system has a zero eigenvalue with multiplicity \(n-1\). Carrying out an expansion up to \(\mathcal{O}(\epsilon^{2})\) as described above and applying the \(n-1\) solvability conditions leads to the normal-form equations Eq. (42).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Roxin, A. Drift–diffusion models for multiple-alternative forced-choice decision making. J. Math. Neurosc. 9, 5 (2019). doi:10.1186/s13408-019-0073-4
Keywords
 Decision making
 Networks
 Winnertakeall