Adriana F. Chávez De la Peña

Research Projects

A custom JAGS module for the Bayesian implementation of the CDDM
We developed open-source software (a custom module for the probabilistic programming language JAGS) to support the Bayesian implementation of the circular drift-diffusion model (CDDM). This work was published in Computational Brain & Behavior, in a paper where we:
  • Introduce our custom JAGS module with directions on where and how to install it
  • Validate our implementation through a full parameter recovery simulation study
  • Showcase the benefits of working with the CDDM under a Bayesian framework through a sample application using publicly available data
A comparison of sampling algorithms for the CDDM
I developed several sampling algorithms for generating choice and response-time data from the CDDM and compared them in terms of accuracy and computational efficiency. The best-performing algorithm will soon be incorporated into our custom JAGS module for the Bayesian implementation of the CDDM.
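The algorithms compared in this project are not reproduced here; as a point of reference, below is a minimal sketch of the simplest baseline for generating CDDM data, assuming the standard formulation of the model as a two-dimensional Wiener process with drift that is absorbed at a circular boundary. Parameter names, default settings, and the Euler-Maruyama step size are illustrative choices for this sketch.

```python
import numpy as np

def simulate_cddm(n_trials, drift_angle, drift_length, boundary, ndt,
                  dt=0.001, sigma=1.0, max_time=10.0, rng=None):
    """Baseline Euler-Maruyama simulation of the circular drift-diffusion model.

    A 2D Wiener process with drift of length `drift_length` in direction
    `drift_angle` starts at the origin and is absorbed the first time it
    crosses a circle of radius `boundary`. The hitting angle is the choice;
    the hitting time plus the non-decision time `ndt` is the response time.
    """
    rng = np.random.default_rng(rng)
    mu = drift_length * np.array([np.cos(drift_angle), np.sin(drift_angle)])
    choices, rts = np.empty(n_trials), np.empty(n_trials)
    sqrt_dt = np.sqrt(dt)
    for i in range(n_trials):
        x = np.zeros(2)
        t = 0.0
        while np.dot(x, x) < boundary ** 2 and t < max_time:
            x += mu * dt + sigma * sqrt_dt * rng.standard_normal(2)
            t += dt
        choices[i] = np.arctan2(x[1], x[0])   # response angle on the circle
        rts[i] = t + ndt
    return choices, rts

# Example: drift pointing at 45 degrees, moderate drift length
angles, rts = simulate_cddm(500, drift_angle=np.pi / 4, drift_length=1.5,
                            boundary=2.0, ndt=0.3, rng=1)
```

This brute-force simulation is accurate for small step sizes but slow, which is exactly the kind of trade-off the algorithm comparison addresses.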
An EZ Bayesian hierarchical drift diffusion model for response time and accuracy
We developed a Bayesian hierarchical drift-diffusion model (DDM) with efficient sampling methods that can be implemented in any probabilistic programming language. Our model uses binomial and normal distributions to describe the sampling distributions of the key EZ-DDM summary statistics (accuracy, mean response time, and response-time variance), enabling versatile extensions to hierarchical models with latent-variable and meta-regression structures.
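As a rough illustration of the idea (not the authors' code), the sketch below writes out the standard EZ-diffusion forward equations (Wagenmakers et al., 2007) and approximate sampling distributions for the summary statistics in plain Python rather than in a probabilistic programming language; the scaling constant s = 1 and the normal approximation for the sample variance are common choices assumed here, not necessarily those used in the paper.

```python
import numpy as np

def ez_forward(drift, boundary, ndt, s=1.0):
    """Forward EZ-diffusion equations: map drift rate, boundary separation,
    and non-decision time to predicted accuracy, mean RT, and RT variance."""
    y = drift * boundary / s ** 2
    pc = 1.0 / (1.0 + np.exp(-y))                           # predicted accuracy
    mrt = ndt + (boundary / (2 * drift)) * np.tanh(y / 2)   # predicted mean RT
    # Predicted RT variance, obtained by inverting the EZ drift-rate equation
    vrt = (s ** 4 * y / drift ** 4) * (y * pc ** 2 - y * pc + pc - 0.5)
    return pc, mrt, vrt

def sample_summary_stats(n, drift, boundary, ndt, rng=None):
    """Approximate sampling distributions of the EZ summary statistics:
    a binomial for the number of correct responses, and normals for the
    sample mean and sample variance of the response times."""
    rng = np.random.default_rng(rng)
    pc, mrt, vrt = ez_forward(drift, boundary, ndt)
    n_correct = rng.binomial(n, pc)
    mean_rt = rng.normal(mrt, np.sqrt(vrt / n))
    var_rt = rng.normal(vrt, np.sqrt(2 * vrt ** 2 / (n - 1)))
    return n_correct, mean_rt, var_rt

print(sample_summary_stats(200, drift=1.0, boundary=1.5, ndt=0.3, rng=1))
```

Because these likelihoods involve only binomial and normal nodes, the same structure can be declared directly in JAGS, Stan, or any other probabilistic programming language, which is what makes the hierarchical extensions straightforward.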
Robust Bayesian hypothesis testing with the hierarchical EZ-DDM
This project builds on the hierarchical EZ-DDM described above, extending it to robust Bayesian hypothesis testing.
The HDI + ROPE decision rule is logically incoherent, but we can fix it
We show that the Bayesian HDI + ROPE (highest-density interval plus region of practical equivalence) decision rule is logically incoherent: because HDIs are not transformation invariant, the accept/reject decision changes with arbitrary model reparameterizations. The underlying mistake is treating probability density as if it were probability. We illustrate the failure with theory and examples, recommend alternative Bayesian testing procedures, and offer a simple fix based on quantile intervals.
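To make the incoherence concrete, here is a small numerical illustration (not one of the paper's examples): a skewed Beta posterior whose 95% HDI depends on whether it is computed on the probability scale or on the log-odds scale, whereas the equal-tailed quantile interval does not move. The grid-search HDI routine is a generic implementation assumed for this sketch.

```python
import numpy as np
from scipy import stats
from scipy.special import logit, expit

def hdi_from_ppf(ppf, mass=0.95, grid=10_000):
    """Narrowest interval with the requested posterior mass, found by
    scanning over the lower-tail probability of a unimodal distribution."""
    lower_tail = np.linspace(0, 1 - mass, grid)
    lo, hi = ppf(lower_tail), ppf(lower_tail + mass)
    best = np.argmin(hi - lo)
    return lo[best], hi[best]

posterior = stats.beta(2, 10)          # a skewed posterior for a rate theta

# 95% HDI computed on the theta scale
hdi_theta = hdi_from_ppf(posterior.ppf)

# 95% HDI computed on the log-odds scale, then mapped back to theta
hdi_logodds = hdi_from_ppf(lambda p: logit(posterior.ppf(p)))
hdi_back = tuple(expit(np.array(hdi_logodds)))

# Equal-tailed (quantile) interval: invariant under monotone transformations
eti_theta = posterior.ppf([0.025, 0.975])

print("HDI on theta scale:           ", np.round(hdi_theta, 3))
print("HDI via log-odds, mapped back:", np.round(hdi_back, 3))
print("Equal-tailed interval:        ", np.round(eti_theta, 3))
# The two HDIs disagree, so an HDI + ROPE decision can flip with the
# parameterization; the quantile interval gives the same answer either way.
```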
On Cronbach's merger: Why experiments may not be suitable for measuring individual differences
We investigate Cronbach's 1957 call to merge differential and experimental psychology by using true experiments to study individual differences. Through simulation, we show that experimentally defined contrasts are too noise-prone to be useful at the individual level, making it difficult to recover even simple latent structures. We introduce a new signal-to-noise ratio measure of task goodness that is invariant across experiments. Latent cluster or factor structures were recoverable only in the largest experiments (hundreds of people and hundreds of trials per condition), and only for the simplest structures. These results serve as a warning: while Cronbach's merger is theoretically appealing, it faces substantial practical hurdles.
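The paper's simulation design and its signal-to-noise measure are not reproduced here; the toy simulation below, with assumed effect sizes and noise levels, only illustrates the general attenuation problem: with few trials per condition, trial noise swamps true individual differences in contrast scores, so even a strong latent correlation between two tasks looks weak in the observed contrasts.

```python
import numpy as np

rng = np.random.default_rng(2024)

n_people = 200          # participants
sigma_between = 0.05    # assumed SD of true individual contrast effects (s)
sigma_trial = 0.25      # assumed trial-to-trial RT noise (s)

# True individual effects on two tasks, correlated r = .8 at the latent level
latent = rng.multivariate_normal(
    [0.05, 0.05],
    (sigma_between ** 2) * np.array([[1.0, 0.8], [0.8, 1.0]]),
    size=n_people)

for n_trials in (20, 100, 500):
    # Observed contrast = difference of two condition means, each based on
    # n_trials trials, so its noise variance is 2 * sigma_trial^2 / n_trials
    noise_sd = np.sqrt(2 * sigma_trial ** 2 / n_trials)
    observed = latent + rng.normal(0, noise_sd, size=latent.shape)
    r = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]
    print(f"{n_trials:4d} trials/condition: observed correlation r = {r:.2f}")
# With few trials the observed correlation falls far below the true .8,
# which is why latent structure among contrasts is hard to recover.
```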