I am trying to learn a likelihood network using BayesFlow. The simulation budget is 800 training datasets. I followed the steps in the JANA paper for joint learning of the posterior and likelihood. I have a few questions; I hope we can discuss them and figure things out. Thanks!
I tried to directly learn a likelihood network P(raw data | theta) using an invertible network, but the results are pretty bad, possibly due to the limited training data.
I believe that, with sufficient training data, it should be possible to learn P(raw data | theta). However, I found that the output of [amortizer.sample_data(validation_sims, n_samples=1000)] is 3-dimensional, i.e., (n_datasets, n_posterior_samples, n_features in the time series data). My expectation is a probabilistic prediction of shape (n_datasets, n_posterior_samples, n_time_steps, n_features in the time series data), which is 4-dimensional. Does anyone have ideas on how to output probabilistic time series data using BayesFlow? Maybe I missed some points/functions?
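One possible workaround (my own sketch, not a BayesFlow function): if the surrogate models the *flattened* series, each draw carries n_time_steps * n_features entries, and the 3-D sample array can simply be reshaped into the 4-D prediction you expect. All shapes and names below are illustrative.

```python
import numpy as np

# Hypothetical shapes matching the question; these are not BayesFlow outputs,
# just random placeholders standing in for surrogate samples.
n_datasets, n_samples = 5, 1000
n_steps, n_features = 100, 4

# If the likelihood network models the flattened series, a sample batch
# has shape (n_datasets, n_samples, n_steps * n_features) ...
flat_samples = np.random.randn(n_datasets, n_samples, n_steps * n_features)

# ... and can be reshaped back into a 4-D probabilistic time series prediction:
series_samples = flat_samples.reshape(n_datasets, n_samples, n_steps, n_features)
print(series_samples.shape)  # (5, 1000, 100, 4)
```

This only changes the bookkeeping, of course; whether an invertible network can learn the full flattened series from 800 simulations is a separate question.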
Alternatively, we can learn the likelihood network for compressed data, i.e., P(compressed data | theta), which I found much easier to learn. I used to compress time series data outside of BayesFlow, e.g., by applying signal processing, and then fed the compressed data into BayesFlow. Is it possible to directly learn P(compressed data | theta) in BayesFlow? When I define the AmortizedLikelihood, I did not see any arguments related to a summary network.
Besides BayesFlow, are there any suggestions for likelihood network estimation using other methods that can probabilistically predict time series data?
Thanks for your attention, everyone. This forum is incredibly meaningful and supportive!
Welcome to the forum! I do not know the complexity of the approximation task, but 800 training data sets sounds like a very small training budget. Is it possible to increase the number of data sets?
Regarding your time series questions, maybe @marvinschmitt or @KLDivergence as authors of the JANA paper have some suggestions?
Hi Lasse,
Thanks for your answer! The 800 training data sets do seem too small. But for many engineering problems, especially in my area of structural engineering, running the simulation even once can be very expensive, leading to a relatively small amount of training data. I even increased the number of data sets and applied it to multivariate time series data, e.g., of shape 1500x100x4; the likelihood network P(raw data | theta) still does not perform well and cannot output a 4-dimensional prediction (1500, n_posterior_samples, 100, 4).
Jice, the code for learning a surrogate for time series is here. This does not use a vanilla likelihood network, but one with recurrent memory.
However, we used online learning and it seems pretty much infeasible to learn a proper surrogate from small data using that architecture.
I believe two promising approaches would be 1) learning the likelihood of informative summary statistics; 2) learning a variational recurrent surrogate with some limiting assumptions (e.g., a Gaussian p(data_t | parameters)).
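To make the second suggestion concrete: under a Gaussian assumption, the surrogate predicts a mean and (log-)standard deviation per time step, and the series log-likelihood is the sum of per-step Gaussian log-densities. The sketch below (my own illustration, not code from the linked example) shows the likelihood computation with placeholder predictions in place of an actual recurrent network.

```python
import numpy as np

def gaussian_step_loglik(x, mu, log_sigma):
    """Log-density of each entry under a diagonal Gaussian
    p(data_t | parameters) with predicted mean and log-std."""
    sigma = np.exp(log_sigma)
    return -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((x - mu) / sigma) ** 2

# Toy multivariate series of shape (n_steps, n_features). In a real
# surrogate, mu and log_sigma would come from a recurrent network
# conditioned on theta (and past data); here they are placeholders.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
mu = np.zeros_like(x)
log_sigma = np.zeros_like(x)

# With the factorization p(x_{1:T} | theta) = prod_t p(x_t | theta, x_{<t}),
# the total log-likelihood is the sum over time steps and features.
total_loglik = gaussian_step_loglik(x, mu, log_sigma).sum()
print(total_loglik)
```

Training such a surrogate then amounts to maximizing this log-likelihood with respect to the network that outputs mu and log_sigma.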
Thank you so much! I will take a look at the example.
You are right! Learning the likelihood of summary statistics is much easier. I learned a likelihood network on frequency-domain data rather than time-domain data, and the results are good enough.
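For readers landing on this thread, here is a minimal sketch of the kind of frequency-domain compression described above (the function, bin count, and shapes are my own illustration, not part of BayesFlow): keep the magnitudes of the leading FFT bins per channel as a fixed-length summary vector.

```python
import numpy as np

def frequency_summaries(series, n_bins=16):
    """Compress a (n_steps, n_features) time series into frequency-domain
    summary statistics: magnitudes of the first n_bins rFFT bins per
    feature, flattened into a single vector."""
    spectrum = np.abs(np.fft.rfft(series, axis=0))  # (n_steps // 2 + 1, n_features)
    return spectrum[:n_bins].ravel()                # (n_bins * n_features,)

rng = np.random.default_rng(1)
raw = rng.normal(size=(100, 4))        # one simulated multivariate series
compressed = frequency_summaries(raw)  # fixed-length summary vector
print(compressed.shape)  # (64,)
```

The compressed vector then plays the role of "data" for the likelihood network, i.e., one learns P(compressed data | theta) instead of the raw series.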
Yes, tuning the network should lead to even better results. Generally speaking, smaller data sets require more regularization for the networks to generalize well.
I tried to load the model in the example you referred to. I guess the trained model is saved in the checkpoints, but I am not sure which file in the checkpoints directory is the right model or how to load it. Did you save the weights of the joint_amortizer?
The Jupyter notebook covid19_joint_memory_MMD.ipynb [1] contains the training and evaluation code. It loads the checkpoint from checkpoints/joint_mmd. The repository contains the saved checkpoint, so it should work from what I see (please correct me if I’m wrong @KLDivergence).
@Jice, did you download the entire folder and try to load the checkpoint that way? After initializing the trainer, you should see a message telling you that a checkpoint with 100 epochs has been loaded.