I’d like to compare two models to determine which fits my empirical data best.
Model 1: A standard drift diffusion model (DDM) with four parameters.
Model 2: An extended DDM in which some parameters fluctuate across trials according to an AR(1) process. In this model, I estimate the trial-specific fluctuations (“local parameters”), the AR(1) process parameters (“hyperparameters”), and the parameters that remain constant across trials (“shared parameters”).
Both models produce a response and reaction time for each trial.
I’m using the legacy version of BayesFlow, where the model comparison tutorial relies on the AmortizedModelComparison class. However, my two models differ in both their configurators and their summary networks because of the hierarchical structure of the extended model.
Could you please provide guidance on how to perform model comparison in this case?
This is an interesting question. Here is an idea. If the two models assume orthogonal symmetries (e.g., IID vs. Markovian), you can build a composite summary network that contains both a permutation-invariant subnetwork and a time-aware subnetwork, each processing the same data. Their outputs are concatenated and passed on to the classifier head, so the architecture can learn both time-invariant and time-dependent features (see the sketch below).
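Here is a minimal sketch of such a composite summary network for the legacy (TensorFlow-based) API, under the assumption that your configured data arrive as a single tensor of shape (batch_size, num_trials, num_obs_dims). The two subnetworks are hypothetical stand-ins written in plain Keras: a simple DeepSet-style invariant block and an LSTM as the time-aware block. You could swap in BayesFlow’s built-in summary networks (e.g., an invariant network and a time-series transformer) if you prefer.

```python
import tensorflow as tf


class CompositeSummaryNet(tf.keras.Model):
    """Summary network that concatenates time-invariant and time-dependent features."""

    def __init__(self, summary_dim=32, **kwargs):
        super().__init__(**kwargs)
        # Permutation-invariant branch: per-trial embeddings followed by mean pooling
        self.trial_embedding = tf.keras.Sequential(
            [tf.keras.layers.Dense(64, activation="relu") for _ in range(2)]
        )
        self.invariant_head = tf.keras.layers.Dense(summary_dim, activation="relu")
        # Time-aware branch: an LSTM over the ordered trial sequence
        self.recurrent = tf.keras.layers.LSTM(summary_dim)

    def call(self, x, **kwargs):
        # x: (batch_size, num_trials, num_obs_dims), e.g. response and RT per trial
        pooled = tf.reduce_mean(self.trial_embedding(x), axis=1)
        invariant_summary = self.invariant_head(pooled)  # order-insensitive features
        temporal_summary = self.recurrent(x)             # order-sensitive features
        # Concatenate both views so the classifier sees them side by side
        return tf.concat([invariant_summary, temporal_summary], axis=-1)
```

You would then pass an instance of this network as the summary_net of AmortizedModelComparison, together with a two-class classifier head (e.g., the PMP network used in the model comparison tutorial), and use a single shared configurator that produces the same data tensor for both models.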
This assumes, of course, that you want to compare the models based on posterior model probabilities (PMPs). You can also compare them based on predictive performance, which you can already do with your two existing parameter estimation networks and their separate configurators.