Hi all,
I am currently running a drift diffusion model of post-decisional evidence accumulation and am having trouble reconciling the different outcomes I get from BayesFlow (BF) v1 and v2.
I understand that these procedures aren't deterministic and there will always be some differences, but the rank ECDF almost always crosses the 95% confidence bands across different posterior samples under BF v2, while it stays inside them under BF v1. Furthermore, in BF v2 the ECDFs are quite volatile for boundary separation (a) and non-decision time (ter).
I'm wondering what the reasons for this could be. It's also worth noting that parameter recovery is much stronger in BF v2 than in BF v1 across all parameters, especially for v_ratio/efficiency (~0.5 in BF v1 vs ~0.7 in BF v2). My hunch is that I didn't do a good job of translating the configurator from BF v1 into the adapter in BF v2. Another possible reason is that the former is run with a coupling flow while the latter uses flow matching, although I'm not knowledgeable enough to judge whether this would make a big difference.
BF v1
import numpy as np
from tensorflow.keras.utils import to_categorical

# A configurator transforms simulator output into the format the neural network expects
def configurator(forward_dict):
    # Prepare placeholder dict
    out_dict = {}

    # Extract simulated data
    data = forward_dict["sim_data"]

    # Trial-level context, shape (B, N, 1)
    context = np.array(forward_dict["sim_batchable_context"])[..., None].astype(np.float32)

    # Map responses and stimuli from {-1, 1} to {0, 1} before one-hot encoding
    resp = ((data[:, :, 1] + 1.0) * 0.5).astype(np.float32)
    stim = ((data[:, :, 3] + 1.0) * 0.5).astype(np.float32)

    rt_resp = data[:, :, 0:1].astype(np.float32)
    cat_resp = to_categorical(resp, num_classes=2)
    cat_acc = to_categorical(data[:, :, 2], num_classes=2)
    cat_stim = to_categorical(stim, num_classes=2)
    ord_conf = data[:, :, 4:5].astype(np.float32)
    post_rt = data[:, :, 5:6].astype(np.float32)

    out_dict["summary_conditions"] = np.concatenate(
        [rt_resp, cat_resp, cat_acc, cat_stim, ord_conf, post_rt, context], axis=-1
    )

    # Square root of the number of observations as a direct condition
    vec_num_obs = forward_dict["sim_non_batchable_context"] * np.ones((data.shape[0], 1))
    out_dict["direct_conditions"] = np.sqrt(vec_num_obs).astype(np.float32)

    out_dict["parameters"] = forward_dict["prior_draws"].astype(np.float32)
    return out_dict
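To make it easier to compare against the adapter below, here is a dependency-free sanity check of the configurator's shape logic on fake data (to_categorical replaced by np.eye indexing; the batch size, trial count, and column layout here are illustrative assumptions, not taken from the actual simulator):

```python
import numpy as np

B, N = 4, 32  # hypothetical batch size and number of trials
rng = np.random.default_rng(0)

# Fake sim_data columns: rt, resp (+-1), acc (0/1), stim (+-1), conf, post_rt
data = np.stack([
    rng.uniform(0.3, 2.0, (B, N)),             # rt
    rng.choice([-1.0, 1.0], (B, N)),           # resp
    rng.choice([0.0, 1.0], (B, N)),            # acc
    rng.choice([-1.0, 1.0], (B, N)),           # stim
    rng.integers(1, 5, (B, N)).astype(float),  # ordinal confidence
    rng.uniform(0.1, 1.0, (B, N)),             # post-decision rt
], axis=-1)

# {-1, 1} -> {0, 1}, then one-hot via integer indexing
resp01 = ((data[:, :, 1] + 1.0) * 0.5).astype(int)
cat_resp = np.eye(2, dtype=np.float32)[resp01]

summary = np.concatenate([
    data[:, :, 0:1],                                                       # rt
    cat_resp,                                                              # resp one-hot
    np.eye(2, dtype=np.float32)[data[:, :, 2].astype(int)],                # acc one-hot
    np.eye(2, dtype=np.float32)[((data[:, :, 3] + 1.0) * 0.5).astype(int)],# stim one-hot
    data[:, :, 4:5],                                                       # conf
    data[:, :, 5:6],                                                       # post_rt
    np.ones((B, N, 1), dtype=np.float32),                                  # context placeholder
], axis=-1)

print(summary.shape)  # (4, 32, 10): 1 + 2 + 2 + 2 + 1 + 1 + 1 channels
```

So the network should see 10 summary channels per trial in both versions; if the adapter produces a different last dimension, that alone could explain divergent calibration.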
BF v2
adapter = (
    bf.Adapter()
    .broadcast("num_obs", to="rt")
    .as_set(["rt", "resp", "acc", "stim", "conf_resp", "post_rt"])
    .sqrt("num_obs")
    # Map responses and stimuli from {-1, 1} to {0, 1}
    .shift(["resp", "stim"], by=1.0)
    .scale(["resp", "stim"], by=0.5)
    # .one_hot(["resp", "acc", "stim"], num_classes=2)
    .concatenate(["v", "a", "ter", "v_bias", "a_bias", "m_bias", "v_ratio"], into="inference_variables")
    .concatenate(["rt", "resp", "acc", "stim", "conf_resp", "post_rt"], into="summary_variables")
    .rename("num_obs", "inference_conditions")
    .convert_dtype("float64", "float32")
)
# If the one_hot line is uncommented, calling 'adapter(sim_draws)' raises: IndexError: arrays used as indices must be of integer (or boolean) type
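For what it's worth, that IndexError is consistent with one-hot encoding being implemented via integer array indexing under the hood, which would reject the float arrays that shift/scale produce; this is only my guess at the mechanism, but it reproduces in plain numpy without any BayesFlow involvement:

```python
import numpy as np

resp = np.array([-1.0, 1.0, 1.0, -1.0])
resp01 = (resp + 1.0) * 0.5  # still float dtype after shift/scale

eye = np.eye(2)
try:
    eye[resp01]  # one-hot via fancy indexing fails on a float index array
    raised = False
except IndexError as e:
    raised = True
    print(e)  # arrays used as indices must be of integer (or boolean) type

onehot = eye[resp01.astype(int)]  # casting to int first works
print(onehot.shape)  # (4, 2)
```

If that's the cause, converting resp/acc/stim to an integer dtype before the one_hot step (rather than only at the final convert_dtype) might be the fix, but I'd appreciate confirmation.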
I’ve done my best to make sure the rest of the code functions as similarly as possible.