I’m currently training an NPE model on time series data using the TimeSeriesNetwork as my summary network.
In addition to the time series, I have a second, low-dimensional data modality. What I would like to do is:
First, encode the time series using the TimeSeriesNetwork, producing a latent summary s.
Then concatenate this summary with the additional low-dimensional modality.
Pass the concatenated representation through a second (simple) summary network, e.g., an MLP.
Feed the final summary into the normalizing flow (CouplingFlow).
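At the shape level, the pipeline I have in mind can be sketched like this (pure NumPy stand-ins; the dimensions and the mean-pool encoder are illustrative placeholders, not the actual TimeSeriesNetwork or BayesFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, t_len, ts_dim, extra_dim, latent, summary_dim = 4, 64, 2, 3, 16, 8

# Stage 1: encode the time series into a latent summary s
# (mean-pool + linear projection stands in for the TimeSeriesNetwork)
ts = rng.normal(size=(batch, t_len, ts_dim))
w_enc = rng.normal(size=(ts_dim, latent))
s = ts.mean(axis=1) @ w_enc                    # (batch, latent)

# Stage 2: concatenate s with the low-dimensional modality
extra = rng.normal(size=(batch, extra_dim))
fused = np.concatenate([s, extra], axis=-1)    # (batch, latent + extra_dim)

# Stage 3: a small MLP produces the final summary fed to the flow
w1 = rng.normal(size=(latent + extra_dim, 32)); b1 = np.zeros(32)
w2 = rng.normal(size=(32, summary_dim)); b2 = np.zeros(summary_dim)
final_summary = np.maximum(fused @ w1 + b1, 0.0) @ w2 + b2
print(final_summary.shape)  # (4, 8)
```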
My motivation is that I would like to retain BayesFlow’s joint optimization of summary and flow networks. The additional modality may sometimes provide useful information for inferring the target parameters, but in other cases it might be uninformative or even detrimental. Ideally, the summary network could learn how much to rely on it.
Is this type of multi-stage summary architecture supported in BayesFlow? If so, what would be the recommended way to implement it?
Thank you very much for your great work on BayesFlow!
Absolutely, check out this short tutorial notebook, which shows how to build such pipelines using the FusionNetwork interface:
If I am misunderstanding the problem and this doesn't work out of the box, you can still craft a minimal custom summary network that reuses existing components.