
Why do we need variability parameters in the diffusion decision model?

In a previous blog post, I introduced the four main parameters of the Diffusion Decision Model (DDM) (Ratcliff, 1978): drift rate (v), boundary separation (a), starting point (z), and nondecision time (Ter). The DDM often also incorporates three sources of across-trial variability, each represented by a distinct parameter. When these variability parameters are included, the model is often referred to as the “full” DDM or the “Ratcliff DDM” (Ging-Jehli et al., 2021; Ratcliff & Smith, 2004). Note that the term “diffusion decision model” typically implies within-trial variability: the evidence accumulation process itself is stochastic from moment to moment. In contrast, models such as the linear ballistic accumulator (LBA) assume a deterministic accumulation process. I will explore the similarities and differences between these and other sequential sampling models in an upcoming blog post; in the meantime, interested readers may consult Brown and Heathcote (2008) and Donkin et al. (2011).

 

The three sources of variability in the DDM

There are three sources of variability that can be modeled across trials: variability in drift rate (denoted “η”), variability in starting point (denoted “sz”), and variability in nondecision time (denoted “st”). These parameters capture the inherent fluctuations in cognitive processing that occur from one trial to the next. Even when presented with the same stimulus twice, our decision-making process may differ each time due to the imperfect and inherently noisy nature of information integration in the brain (Forstmann & Wagenmakers, 2015; Ratcliff & Smith, 2004; Smith & Ratcliff, 2004). The graphic below illustrates these variabilities, showing how they influence the decision process across different trials.
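To make the roles of η, sz, and st concrete, here is a minimal sketch (with illustrative parameter values I chose for demonstration, not fitted to any dataset) of how the full DDM conventionally draws the effective parameters of a single trial: the drift from a normal distribution, and the starting point and nondecision time from uniform distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (for demonstration only, not fitted to data).
v, a, z, Ter = 0.2, 0.1, 0.05, 0.3   # mean drift, boundary separation, mean start, mean nondecision time (s)
eta, sz, st = 0.1, 0.02, 0.1         # across-trial variability of drift, start, and nondecision time

def sample_trial_params():
    """Draw the effective parameters governing a single trial of the full DDM:
    drift ~ Normal(v, eta); start ~ Uniform over a range of width sz centered on z;
    nondecision time ~ Uniform over a range of width st centered on Ter."""
    v_i = rng.normal(v, eta)
    z_i = rng.uniform(z - sz / 2, z + sz / 2)
    t_i = rng.uniform(Ter - st / 2, Ter + st / 2)
    return v_i, z_i, t_i
```

Each trial thus runs its own diffusion process with these trial-specific values, which is what makes the predicted RT distributions mixtures rather than single homogeneous distributions.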


Assume, without loss of generality, that responses terminating at the upper boundary are correct and those terminating at the lower boundary are errors. With that convention in place, it’s worth revisiting some historical context about the DDM (see also my previous introductory blog post about the DDM). Before Roger Ratcliff unified existing ideas into the DDM in 1978, earlier accumulator models struggled with a significant limitation: they predicted that errors and correct responses occur at the same speed. Empirical observations, however, often show that response time (RT) distributions for the two response categories differ. In discrimination tasks, errors are frequently slower than correct responses, although under certain conditions the reverse holds: when discriminability is high and speed is emphasized, error RTs tend to be shorter than correct RTs, whereas when discriminability is low and accuracy is prioritized, error RTs are longer (Ratcliff & Smith, 2004). Ratcliff addressed this discrepancy by incorporating variability parameters into the DDM, successfully capturing these empirically observed patterns.

 

Across-trial variability in drift rate can account for slower errors than corrects

Across-trial variability in drift rate (η) reflects changes in the speed and consistency with which information is processed from trial to trial. Incorporating this parameter enhances the model’s ability to account for the empirical observation that errors can be slower than correct responses. By assuming that drift rates vary across trials, we recognize that the decision process in each trial is driven by a different rate of information accumulation. Consequently, the resulting RT distributions are probability mixtures of trials with varying drift rates, some high and some low. The overall mean RT for a given response is thus a weighted average of RTs from high-drift and low-drift trials, reflecting the diversity in decision speed and accuracy observed across trials (Ratcliff & Smith, 2004; Van Zandt & Ratcliff, 1995).
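This mixture argument is easy to check in a small simulation. The sketch below uses illustrative parameter values and a homemade `simulate_ddm` helper (my own, not a library function). The starting point is fixed at the midpoint between the boundaries, so that for any single drift rate the error and correct RT distributions would be identical; across-trial drift variability alone then makes errors slower on average, because errors arise disproportionately on low-drift trials, and low-drift trials are slow.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(v_mean, eta, a=0.1, s=0.1, dt=0.001, n_trials=20000, max_steps=5000):
    """Euler simulation of the diffusion process with drift varying across trials.
    Start is fixed at a/2; upper boundary = correct response, by convention."""
    v = rng.normal(v_mean, eta, n_trials)   # a fresh drift rate on every trial
    x = np.full(n_trials, a / 2)            # accumulated evidence, starting at the midpoint
    rt = np.zeros(n_trials)
    done = np.zeros(n_trials, dtype=bool)
    upper = np.zeros(n_trials, dtype=bool)
    for step in range(1, max_steps + 1):
        active = ~done
        if not active.any():
            break
        x[active] += v[active] * dt + s * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & ((x >= a) | (x <= 0.0))
        upper[hit] = x[hit] >= a
        rt[hit] = step * dt
        done |= hit
    return rt[done], upper[done]   # drop the rare trials that never terminate

rt, correct = simulate_ddm(v_mean=0.2, eta=0.15)
# Errors (lower-boundary responses) are slower on average than corrects.
```

Comparing `rt[~correct].mean()` with `rt[correct].mean()` reproduces the slow-error pattern from drift variability alone.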

 

Across-trial variability in starting point can account for faster errors than corrects

Starting point variability (sz) accounts for differences in the bias or predisposition towards one decision boundary over another at the start of each trial. Variability in starting point is often driven by sequential effects, where the speed and accuracy of responses are influenced by the stimuli and responses on preceding trials. Incorporating this variability parameter into a model enhances its ability to account for the empirical observation that errors can be faster than correct responses (Ratcliff et al., 1999; Van Zandt & Ratcliff, 1995).
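The same kind of simulation illustrates the opposite effect for sz. In the sketch below (again with illustrative values; `simulate_ddm_sz` is my own helper), the drift rate is fixed and only the starting point varies across trials; errors then arise mostly on trials that start close to the lower boundary, and those trials terminate quickly.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm_sz(v=0.25, a=0.1, sz=0.06, s=0.1, dt=0.001, n_trials=20000, max_steps=5000):
    """Euler simulation with a fixed drift and a starting point drawn uniformly
    from a range of width sz centered on a/2. Upper boundary = correct."""
    x = rng.uniform(a / 2 - sz / 2, a / 2 + sz / 2, n_trials)  # per-trial starting point
    rt = np.zeros(n_trials)
    done = np.zeros(n_trials, dtype=bool)
    upper = np.zeros(n_trials, dtype=bool)
    for step in range(1, max_steps + 1):
        active = ~done
        if not active.any():
            break
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(active.sum())
        hit = active & ((x >= a) | (x <= 0.0))
        upper[hit] = x[hit] >= a
        rt[hit] = step * dt
        done |= hit
    return rt[done], upper[done]

rt, correct = simulate_ddm_sz()
# Errors now come mostly from trials starting near the lower boundary,
# which terminate quickly, so errors are faster on average than corrects.
```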

 

Across-trial variability in nondecision time can account for condition-specific changes in the leading edge

Nondecision time variability (st) represents fluctuations in the time taken by processes unrelated to decision-making, such as sensory encoding or motor responses. This aspect of variability is crucial for explaining why some responses are unexpectedly slow or fast, independent of the decision-making process itself. It’s important to note that this parameter impacts both correct and error responses similarly, as it pertains to processes that are not directly related to the decision-making accuracy. Variability in nondecision time has been demonstrated to influence the spread of the leading edges of RT distributions across different conditions, such as in task-switching paradigms (Ging-Jehli & Ratcliff, 2020).
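The effect of st on the leading edge can be illustrated without simulating the diffusion process at all: take any fixed set of decision times (below, an arbitrary right-skewed stand-in distribution, chosen purely for illustration) and compare adding a constant nondecision time with adding a uniformly varying one. The uniform component pulls the fastest responses earlier while leaving the mean essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in decision times: an arbitrary right-skewed distribution; any
# positive, RT-like sample would serve for this illustration.
decision_times = rng.gamma(shape=2.0, scale=0.15, size=200000)

Ter, st = 0.3, 0.1
rt_fixed = decision_times + Ter
rt_var = decision_times + rng.uniform(Ter - st / 2, Ter + st / 2, size=decision_times.size)

q10_fixed = np.quantile(rt_fixed, 0.1)   # leading edge without st
q10_var = np.quantile(rt_var, 0.1)       # leading edge with uniform st
# q10_var falls below q10_fixed: st spreads the leading edge,
# while the mean RT is essentially unaffected.
```

Because st can differ between experimental conditions, this spreading of the leading edge is exactly the kind of condition-specific effect noted above.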

 

Take-home messages

  • The diffusion model predicts equal mean RTs for correct and error responses when the sole source of variability is the moment-to-moment (within-trial) noise in the evidence accumulation process.

  • Variability in starting points (sz) can lead to scenarios where the mean RT for errors is less than that for correct responses.

  • Across-trial variability in drift rates (η) often results in mean RT for correct responses exceeding that for errors.

  • Combining sz and η variability helps to account for crossover interactions under different conditions. For example, fast errors might occur with high discriminability stimuli and slow errors with low discriminability stimuli.

  • With drift rate variability alone, the model cannot predict a crossover pattern in which errors are slower than correct responses in high-accuracy conditions and faster in low-accuracy conditions. For such patterns, especially in conflict tasks, dual-stage conflict DDMs are used. I will elaborate on this in an upcoming blog post.

  • The variability parameters should be interpreted with caution: their psychological interpretations are less established, and their recovery and identifiability are often poorer than those of the main model parameters. Some of these issues can be mitigated by fitting models within a Bayesian hierarchical framework, an approach I will discuss in more detail in an upcoming blog post.

 

Selected References

Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57(3), 153–178. https://doi.org/10.1016/j.cogpsych.2007.12.002

Donkin, C., Brown, S., Heathcote, A., & Wagenmakers, E.-J. (2011). Diffusion versus linear ballistic accumulation: Different models but the same conclusions about psychological processes? Psychonomic Bulletin & Review, 18(1), 61–69. https://doi.org/10.3758/s13423-010-0022-4

Forstmann, B. U., & Wagenmakers, E.-J. (2015). Model-Based Cognitive Neuroscience: A Conceptual Introduction. In B. U. Forstmann & E.-J. Wagenmakers (Eds.), An Introduction to Model-Based Cognitive Neuroscience (pp. 139–156). Springer. https://doi.org/10.1007/978-1-4939-2236-9_7

Ging-Jehli, N. R., & Ratcliff, R. (2020). Effects of aging in a task-switch paradigm with the diffusion decision model. Psychology and Aging, 35(6), 850–865. https://doi.org/10.1037/pag0000562

Ging-Jehli, N. R., Ratcliff, R., & Arnold, L. E. (2021). Improving neurocognitive testing using computational psychiatry—A systematic review for ADHD. Psychological Bulletin, 147(2), 169–231. https://doi.org/10.1037/bul0000319

Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85(2), 59–108. https://doi.org/10.1037/0033-295X.85.2.59

Ratcliff, R., & Smith, P. L. (2004). A Comparison of Sequential Sampling Models for Two-Choice Reaction Time. Psychological Review, 111(2), 333–367. https://doi.org/10.1037/0033-295X.111.2.333

Ratcliff, R., Van Zandt, T., & McKoon, G. (1999). Connectionist and diffusion models of reaction time. Psychological Review, 106(2), 261–300. https://doi.org/10.1037/0033-295X.106.2.261

Smith, P. L., & Ratcliff, R. (2004). Psychology and neurobiology of simple decisions. Trends in Neurosciences, 27(3), 161–168. https://doi.org/10.1016/j.tins.2004.01.006

Van Zandt, T., & Ratcliff, R. (1995). Statistical mimicking of reaction time data: Single-process models, parameter variability, and mixtures. Psychonomic Bulletin & Review, 2(1), 20–54. https://doi.org/10.3758/BF03214411

 

 
