I would like to see a head-to-head comparison of the uncertainties produced by Variational Inference and Deep Evidential Regression on a moderately complex neural network. Any non-trivial regression task would do.
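For the evidential half of such a comparison, Deep Evidential Regression (Amini et al., 2020) has the convenient property that its uncertainties come out in closed form: the network predicts the parameters of a Normal-Inverse-Gamma distribution, and the aleatoric and epistemic variances follow directly. A minimal sketch of that decomposition, assuming the standard NIG parameterization (the function name and example values are illustrative):

```python
import numpy as np

def der_uncertainties(gamma, nu, alpha, beta):
    """Decompose Normal-Inverse-Gamma outputs from Deep Evidential
    Regression into predictive mean, aleatoric variance E[sigma^2],
    and epistemic variance Var[mu] (requires alpha > 1)."""
    aleatoric = beta / (alpha - 1.0)           # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))    # uncertainty in the mean
    return gamma, aleatoric, epistemic

# Example: a single prediction with hypothetical NIG parameters
mean, alea, epi = der_uncertainties(gamma=1.2, nu=5.0, alpha=3.0, beta=2.0)
```

The VI side would instead estimate these quantities by sampling weights from the approximate posterior and splitting the predictive variance into within-sample and between-sample terms, which is what makes the head-to-head interesting.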
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.
If you'd like for me to do this for you, comment `TagBot fix` on this issue.
I'll open a PR within a few hours; please be patient!
Currently we assume that all regression cases will want to use a `y ~ Normal(mu, sigma)` likelihood. This is definitely not always true, and many other distributions could be useful in a deep learning setting. For instance:
- Poisson
- Binomial
- NegativeBinomial
- StudentT (this might be a bit flaky, since we end up with a model evidence following a StudentT either way)
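To make the alternatives concrete, each choice above amounts to swapping out the per-observation negative log-likelihood that the network head is trained against. A hand-rolled sketch of those NLLs using only the standard library (function names and parameterizations are illustrative, not this package's API):

```python
import math

def normal_nll(y, mu, sigma):
    # The current default: Gaussian likelihood
    return 0.5 * math.log(2 * math.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

def poisson_nll(y, rate):
    # y is a non-negative count; rate > 0
    return rate - y * math.log(rate) + math.lgamma(y + 1)

def neg_binomial_nll(y, r, p):
    # pmf: C(y + r - 1, y) * p^r * (1 - p)^y
    return -(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
             + r * math.log(p) + y * math.log(1 - p))

def student_t_nll(y, df, mu, sigma):
    # Heavier tails than the Normal; df controls tail weight
    z = (y - mu) / sigma
    return -(math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
             - 0.5 * math.log(df * math.pi) - math.log(sigma)
             - (df + 1) / 2 * math.log(1 + z**2 / df))
```

Note that the discrete likelihoods (Poisson, Binomial, NegativeBinomial) also change what the network must output: a positive rate or a probability rather than an unconstrained mean, so each likelihood needs its own link function on the final layer.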