Getting Error using ModelComparisonSimulator in BF 2.0.1

Hey,

I am trying to use the new BayesFlow version (2.0.1) and its workflow to make individual-level comparisons between a set of cognitive process models. I based my setup on the “Simple Model Comparison - One Sample T-Test” example, which runs without problems.

However, when I use models with different sets of parameters, I run into an error. Here is a minimal example:

import numpy as np
import bayesflow as bf

rng = np.random.default_rng()

# Prior for model 1: single parameter w
def prior_1():
    return dict(w=rng.uniform())

# Prior for model 2: single parameter c
def prior_2():
    return dict(c=rng.uniform())


def model_1(w):
    return dict(x=w)


def model_2(c):
    return dict(x=c)

simulator_1 = bf.make_simulator([prior_1, model_1])
simulator_2 = bf.make_simulator([prior_2, model_2])

simulator = bf.simulators.ModelComparisonSimulator(simulators=[simulator_1, simulator_2])
simulator.sample(10)

The error I am getting is: dictionary key mismatch; expected key(s): ['w', 'x'], got key(s): ['c', 'x'], missing key(s): ['w'], extra key(s): ['c']

Maybe I have overlooked something in the documentation, but any help would be appreciated.


Hey,
welcome to the forum and thanks for reaching out. I can reproduce the error with your example; it looks like a bug in the way the outputs of the simulators are combined. The current implementation does not seem to handle different parameter names coming from the prior functions. I’m not sure whether this is intended behavior, but even then it should produce a meaningful warning or error. I have opened an issue and hope we can fix this soon. For now, you could work around it by returning the same keys from both priors and setting the parameters a model does not use to zero (or any other arbitrary value).
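
For illustration, here is a rough sketch of that workaround applied to your example. The zero values for the unused parameters are arbitrary placeholders, and I widened the model signatures so both models accept the same keys; this is just one way to make the key sets match, not an official API requirement:

import numpy as np
import bayesflow as bf

rng = np.random.default_rng()

# Both priors now return the same keys; the parameter a model
# does not use is fixed to 0 (any constant would do).
def prior_1():
    return dict(w=rng.uniform(), c=0.0)

def prior_2():
    return dict(w=0.0, c=rng.uniform())

# Both models accept both parameters but only use "their" parameter.
def model_1(w, c):
    return dict(x=w)

def model_2(w, c):
    return dict(x=c)

simulator_1 = bf.make_simulator([prior_1, model_1])
simulator_2 = bf.make_simulator([prior_2, model_2])

simulator = bf.simulators.ModelComparisonSimulator(simulators=[simulator_1, simulator_2])
samples = simulator.sample(10)  # both simulators now emit dicts with keys w, c, x

Until the issue is fixed, the dummy parameter is simply carried along in the outputs, so keep in mind that it is not a meaningful draw for the model that ignores it.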
