[In discussion with Jose Miguel Hernandez-Lobato @jmhernandezlobato and Daniel Hernandez-Lobato @danielhernandezlobato]

The exploration objective used in the paper is a sum of expected reductions in the entropy of the dynamics model's parameters: each reduction is the difference between the parameter entropy at the current time step and the parameter entropy at the next time step, averaged over all possible next states.
I think this objective could be simplified by swapping the roles of the next state and the parameters, as described in Houlsby et al. (2011) and Hernández-Lobato and Adams (2015) (see the active learning experiment there).
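For reference, the swap is just the symmetry of mutual information between the next state and the parameters; a sketch in my own notation (with s, a the current state and action, s' the next state, and θ the dynamics parameters):

```latex
I(\theta; s' \mid s, a)
  = H(\theta) - \mathbb{E}_{s'}\big[ H(\theta \mid s', s, a) \big]          % form used in the paper
  = H(s' \mid s, a) - \mathbb{E}_{\theta}\big[ H(s' \mid s, a, \theta) \big] % BALD form
```

The first form needs entropies of the parameter posterior; the second only needs entropies of predictive distributions, which is what makes it attractive here.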
This equivalent objective is easier to work with for regression models (e.g. the neural-network regression used here), because the term inside the expectation becomes the entropy of the likelihood model, which is constant in the regression case. The remaining difficult term is the entropy of the predictive distribution; in the Gaussian case, maximising it amounts to finding actions that yield the highest predictive variance. For BNNs this can be computed with a Gaussian approximation to the predictive distribution or by Monte Carlo.
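A minimal NumPy sketch of the Monte Carlo version, assuming parameter samples (e.g. an ensemble) that each emit a Gaussian prediction per output dimension; `bald_score` and `candidate_predictions` are hypothetical names, not part of the repo:

```python
import numpy as np

def bald_score(means, variances):
    """Approximate BALD score I(s'; theta) for a regression BNN.

    means, variances: arrays of shape (S, D) -- Gaussian predictive
    moments from S parameter samples, for D output dimensions.
    (Hypothetical helper, not taken from the repo.)
    """
    # Moment-match the mixture over parameter samples (law of total variance).
    mu = means.mean(axis=0)
    total_var = (variances + means ** 2).mean(axis=0) - mu ** 2

    # Marginal predictive entropy H(s') under the Gaussian approximation.
    h_marginal = 0.5 * np.log(2 * np.pi * np.e * total_var)

    # Expected conditional entropy E_theta[H(s' | theta)]; this term is
    # constant when the likelihood noise is fixed, as noted above.
    h_conditional = (0.5 * np.log(2 * np.pi * np.e * variances)).mean(axis=0)

    # Mutual information, summed over independent output dimensions.
    return (h_marginal - h_conditional).sum()

# Toy usage: candidate_predictions is a list of (means, variances) pairs,
# one per candidate action; pick the action with the highest information gain.
# scores = [bald_score(m, v) for m, v in candidate_predictions]
# best_action = int(np.argmax(scores))
```

With a fixed noise variance the score reduces to a monotone function of the total predictive variance, matching the "highest predictive variance" reading above.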
Would this change be easily incorporated into the current code?
BALD could be an interesting way to calculate surprise. However, it seems we would then rely on having accurate dynamics-uncertainty estimates, whereas currently we model the adaptation of the model to the environment itself. It should be quite easy to incorporate, and is definitely worth investigating!