Hi, thanks for the replies! So the use case here is a little bit weird. I am actually calling a function from a larger C++ analysis framework (called MaCh3) that I’ve setup python bindings to so it’s a little hard for me to write down how I’m doing things in great detail without sending links to another C++ repo. Below is a slightly simplified version of how I am setting things up on the python side of things though.
```python
import numpy as np

import _pyMaCh3Tutorial as m3


class CustomObject:
    """A simple class to hold all the necessary objects from MaCh3
    and a place to define the prior and simulation functions.
    """

    def __init__(self, covariance_config):
        self.cov = m3.covariance.Covariance(covariance_config)

    def prior_throw(self):
        # Random throw to propose new parameter values.
        # throw_par_prop is a python binding to a C++ function.
        self.cov.throw_par_prop()
        # Then actually get the proposed step as a np array;
        # again, get_proposal_array is a python binding to a C++ function.
        return np.asarray(self.cov.get_proposal_array())
```
Then I am setting up my prior object like this:
```python
custom_obj = CustomObject(["Inputs/ParametersForMyModel.yaml"])
prior = bf.simulation.Prior(prior_fun=custom_obj.prior_throw)
prior(batch_size=10)
```
This will give me 10 identical draws from the prior, whereas the following code will give me 10 different draws from the prior:
```python
for _ in range(10):
    print(custom_obj.prior_throw())
```
I am a little unsure of why the two cases should give me different results. Looking at the Prior class in the simulator, it should be doing something extremely similar to my simple for loop. I will continue debugging on the C++ side of the functions to understand where things are going wrong, but I am a little confused. For now I have worked around this by making simulations offline and putting them into a dictionary, but ideally I would like to use online training.
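In case it helps narrow things down, here is a self-contained sketch of one hypothesis I'm checking. It uses pure NumPy with a made-up `FakeCov` class standing in for the bindings (`FakeCov` and its buffer behaviour are my assumption, not the real `m3.covariance.Covariance` API): if `get_proposal_array()` returns a view into a buffer that the C++ side overwrites in place, then collecting the arrays first and stacking them later (roughly what a batched prior call would do) yields identical rows, while printing inside the loop still shows different values each time.

```python
import numpy as np


class FakeCov:
    """Hypothetical stand-in for m3.covariance.Covariance (not the real API)."""

    def __init__(self, n_pars):
        self._buf = np.zeros(n_pars)  # internal buffer reused for every throw

    def throw_par_prop(self):
        # Overwrite the buffer in place, as the C++ side might do.
        self._buf[:] = np.random.normal(size=self._buf.shape)

    def get_proposal_array(self):
        return self._buf  # returns a view into the buffer, not a copy


cov = FakeCov(3)

# Collect first, stack later -- roughly what a batched prior call does.
draws = []
for _ in range(10):
    cov.throw_par_prop()
    draws.append(np.asarray(cov.get_proposal_array()))  # np.asarray makes no copy
stacked = np.stack(draws)
all_identical = bool(np.all(stacked == stacked[0]))
print(all_identical)  # every "draw" is the same buffer, so all rows match

# Forcing a copy at each throw restores independent draws:
copies = []
for _ in range(10):
    cov.throw_par_prop()
    copies.append(np.array(cov.get_proposal_array(), copy=True))
copies_identical = bool(np.all(np.stack(copies) == copies[0]))
print(copies_identical)  # each row is now a distinct random throw
```

If this is what's happening, wrapping the binding's return value in `np.array(..., copy=True)` inside `prior_throw` would be a cheap thing for me to try.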
Let me know if you can see anything I am doing immediately wrong, though. As I say, I am new to BayesFlow (and not a fantastic python programmer), so I don't rule out that I've done something silly!
If it helps as extra information: each simulation should have a unique throw from the prior, as each of my parameters is non_batchable.
In terms of trying the dev branch, I can do that to see if it fixes my problems with the prior function. However, I am actually using a fork at the minute (here) which has some functionality that I need for my use case. Basically, this fork adds the ability to apply individual weights to your training data and take them into account in the inference network's loss function (see here). We should have a separate discussion about whether it would be possible to include this feature in develop, as it is a fairly common use case in Particle Physics. The owner of that fork and I are both based at Imperial College London and work in Particle Physics, hence my piggy-backing off of his fork. As I say, though, this is probably best kept as a separate discussion from my current problem.
Thanks again for the replies, and I'm happy to talk more about my use case if it's helpful!