Hi,
I have applied the BayesFlow framework in my domain, and the performance has been fairly good so far. BayesFlow relies on normalizing flows (NFs) as its core mechanism. Here, I would like to discuss some aspects of NFs, and I welcome any comments, suggestions, and paper recommendations.
A crucial limitation of NFs is their substantial memory requirement, which arises because the latent space must keep the same dimension as the input space. I am seeking advice on addressing this curse of dimensionality in NFs. One possible solution is to apply dimensionality reduction before the NF, for example with an encoder network.
I don’t have experience with applications that perform dimensionality reduction in parameter space, since the typical mode of operation is one where all model parameters are of scientific interest and cannot / should not be reduced. However, you can always pre-train an autoencoder that further (conditionally) compresses the parameters and then train the NF on the compressed representation. Is that what you have in mind?
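To make the two-stage idea concrete, here is a minimal sketch in plain PyTorch (this is not BayesFlow API; all names, layer sizes, and the toy data are illustrative assumptions): first pre-train an autoencoder for compression, then fit a small coupling flow on the frozen latent codes.

```python
import math

import torch
import torch.nn as nn

LATENT_DIM = 8  # assumed compressed dimension (illustrative)

class AutoEncoder(nn.Module):
    # Stage 1: deterministic compression of the high-dimensional input.
    def __init__(self, data_dim, latent_dim=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, data_dim))

class AffineCoupling(nn.Module):
    # One RealNVP-style coupling layer; halves are swapped in alternate layers.
    def __init__(self, dim, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, z):
        za, zb = z.chunk(2, dim=-1)
        if self.flip:
            za, zb = zb, za
        s, t = self.net(za).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales numerically tame
        zb = zb * torch.exp(s) + t
        return torch.cat([za, zb], dim=-1), s.sum(-1)  # output, log|det J|

class TinyFlow(nn.Module):
    # Stage 2: a small NF trained by maximum likelihood on the latent codes.
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            AffineCoupling(dim, flip=i % 2 == 1) for i in range(n_layers))

    def log_prob(self, z):
        logdet = torch.zeros(z.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            logdet = logdet + ld
        base = -0.5 * (z ** 2).sum(-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
        return base + logdet                    # standard-normal base density

x = torch.randn(2048, 100)                      # stand-in for your real data

ae = AutoEncoder(data_dim=100)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                            # stage 1: reconstruction loss
    recon = ae.dec(ae.enc(x))
    loss = ((recon - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    z = ae.enc(x)                               # frozen low-dimensional codes

flow = TinyFlow(LATENT_DIM)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(200):                            # stage 2: max likelihood on codes
    loss = -flow.log_prob(z).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The NF then only ever operates in the 8-dimensional latent space, which is what sidesteps the memory issue; whether a deterministic or a (conditional) variational encoder works better will depend on your application.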
Hi Stefan,
Thank you for suggesting the paper. The methods presented are advanced and effectively relax the constraints of traditional normalizing flows (NFs), enhancing their expressivity while still keeping the input and latent spaces at the same dimension.
I have experimented with applying dimensionality reduction as a preprocessing step before training the NF, and it works well. Combining variational autoencoders (VAEs) with NFs has also shown promising results. However, I am still curious whether there is a solution within the NF framework itself that addresses this issue without relying on external dimensionality reduction.
Normalizing Flows are inherently ill-suited to problems where the data lie on a low-dimensional manifold [1]. External dimensionality reduction currently seems to be your best bet. I would recommend following the general approach of [2], which is to use a deterministic encoder-decoder architecture and apply your generative model in the latent space of the encoder. Diffusion and Flow Matching are particularly well-suited here due to their free-form Jacobian, but you can also try [3] if you want to stick to NFs.
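To illustrate why Flow Matching has no dimensionality constraint, here is a minimal, unconditional sketch in plain PyTorch (toy names and sizes, assuming you already have encoder latents; this is not the BayesFlow implementation): the velocity network is free-form, so it places no bijectivity requirement on the latent dimension.

```python
import torch
import torch.nn as nn

dim = 8                                         # assumed encoder latent dimension
vnet = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                     nn.Linear(128, dim))       # free-form velocity field v(z_t, t)
opt = torch.optim.Adam(vnet.parameters(), lr=1e-3)
z1 = torch.randn(4096, dim)                     # stand-in for encoder latents

for _ in range(500):
    z0 = torch.randn_like(z1)                   # noise endpoint of each path
    t = torch.rand(z1.shape[0], 1)
    zt = (1 - t) * z0 + t * z1                  # linear (rectified) interpolation
    target = z1 - z0                            # constant velocity along the path
    loss = ((vnet(torch.cat([zt, t], -1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: integrate dz/dt = v(z, t) from noise (t=0) to data (t=1) with Euler.
z = torch.randn(16, dim)
for k in range(100):
    t = torch.full((16, 1), k / 100.0)
    z = z + vnet(torch.cat([z, t], -1)) / 100.0
```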
Thanks for the comments. You are right: NFs are not a good option for high-dimensional problems, and the straightforward way is to apply the dimensionality reduction externally.
Do you have any other suggestions for high-dimensional inverse problems besides NFs?
Best,
Two options beyond classical NFs:
- Flow Matching: BayesFlow implementation (rectified flow) in bayesflow.experimental.rectifiers. The pre-alpha version of the new BayesFlow 2.0 features Flow Matching as a first-class citizen and provides the simple interface bf.networks.FlowMatching().
- Consistency Models: a BayesFlow implementation will come in BayesFlow 2.0 as well (not yet implemented in the pre-alpha), with an interface like bf.networks.ConsistencyModel().
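For concreteness, a minimal sketch of the interfaces named above; only these two constructor names are confirmed in this thread, and the pre-alpha API may still change:

```python
import bayesflow as bf

# Available in the BayesFlow 2.0 pre-alpha: Flow Matching as an inference network.
flow_matching = bf.networks.FlowMatching()

# Planned interface for consistency models (not yet implemented in the pre-alpha):
# consistency = bf.networks.ConsistencyModel()
```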
If you’re interested in preliminary code for consistency models, let me know and I’ll share the development repo in the “old” BayesFlow API with you.