Hi,
I used the trained model to generate posterior samples; however, I found that some posterior samples were out of domain, e.g., samples falling outside the upper or lower bounds. Does BayesFlow provide any constraints for training the model or generating samples?
I guess the reason for my problem is possibly the latent distribution? The default latent distribution is a multivariate Gaussian; should I specify another latent distribution to avoid out-of-domain samples?
Thanks all!

We have had the same problem, which is probably inevitable with plain vanilla implementations: unless the network is perfectly trained, there is no guarantee that a reverse pass through the network yields parameters within the prior domain. They are outputs of a complex neural net and may violate any such constraints, even if training has significantly reduced such violations. This can be problematic for posteriors that violate physical laws (e.g., negative mass) and thus blow up the simulator when fed back into it for various use cases.

Our simple hack so far is to put hard constraints on the posterior outputs so they remain within the prior's feasible range (which is clear-cut for hard-bounded priors, e.g., uniform priors; less clear for a Gaussian unless you are truncating). You can think of this hack as making your simulator more robust: any input parameter out of bounds is adjusted to the corresponding bound. We have implemented this constraint as a function that operates on BayesFlow-generated posterior samples. I would be interested in any better solutions others may have for this issue.
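For illustration, a minimal sketch of such a clipping function (the function name and the assumption that prior bounds are available as arrays are mine, not from an actual BayesFlow API):

```python
import numpy as np

def clip_to_prior(samples, lower, upper):
    """Clip posterior samples to the prior's feasible range.

    samples: array of shape (n_samples, n_params)
    lower, upper: arrays of shape (n_params,) with the prior bounds
    """
    return np.clip(samples, lower, upper)

# Example: two parameters with uniform priors on [0, 1] and [10, 20]
lower = np.array([0.0, 10.0])
upper = np.array([1.0, 20.0])
samples = np.array([[-0.2, 15.0],   # first parameter below its bound
                    [ 0.5, 21.3]])  # second parameter above its bound
clipped = clip_to_prior(samples, lower, upper)
# → [[0.0, 15.0], [0.5, 20.0]]
```

This is exactly the "adjust to the corresponding bound" behavior described above; it keeps the simulator safe, at the cost of piling probability mass onto the bounds.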

Hi guys, this is not uncommon with complex problems / little data. We recommend learning in unbounded space (e.g., log-transform the bounded parameters) and then back-transforming the parameters wherever interpretation on the original scale is needed.

Learning in transformed space is ubiquitous across libraries, e.g., pyABC:
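As a sketch, the transform approach above might look like this for an interval-bounded parameter (a logit transform; function names are illustrative, not a BayesFlow or pyABC API):

```python
import numpy as np

def to_unbounded(theta, lower, upper):
    """Logit transform: maps (lower, upper) to the whole real line.
    Train the network on these unconstrained values."""
    u = (theta - lower) / (upper - lower)
    return np.log(u / (1.0 - u))

def to_bounded(z, lower, upper):
    """Inverse transform: any real network output z maps back
    strictly inside (lower, upper)."""
    return lower + (upper - lower) / (1.0 + np.exp(-z))

# Round trip for a parameter on (0, 10)
theta = 2.5
z = to_unbounded(theta, 0.0, 10.0)
assert np.isclose(to_bounded(z, 0.0, 10.0), theta)

# Even an extreme network output stays in-domain after back-transform
assert 0.0 < to_bounded(10.0, 0.0, 10.0) < 10.0
```

For strictly positive parameters a plain log/exp pair plays the same role. Since the inverse transform can only produce in-domain values, out-of-domain samples are impossible by construction rather than being clipped or discarded after the fact.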

Thanks for the advice. I am also thinking about adding underlying physics to BayesFlow, e.g., combining PINNs with BayesFlow. Currently, I simply remove the out-of-domain samples based on knowledge of the parameters.
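The removal step mentioned above amounts to a boolean mask over the sample array; a sketch (bound arrays and the function name are assumptions for illustration):

```python
import numpy as np

def drop_out_of_domain(samples, lower, upper):
    """Keep only posterior draws where every parameter is within bounds."""
    mask = np.all((samples >= lower) & (samples <= upper), axis=1)
    return samples[mask]

lower = np.array([0.0, 10.0])
upper = np.array([1.0, 20.0])
samples = np.array([[-0.2, 15.0],   # rejected: first parameter too low
                    [ 0.5, 21.3],   # rejected: second parameter too high
                    [ 0.3, 12.0]])  # kept: fully in-domain
kept = drop_out_of_domain(samples, lower, upper)
# → [[0.3, 12.0]]
```

Note that, unlike the clipping or transform approaches, this shrinks the effective sample size and slightly reshapes the retained posterior, so it is worth checking how many draws survive the filter.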