I wrote a while back about a toy case of the Bayesian surprise problem: what does Bayes' Theorem tell you to believe when you get really surprising data? The one-dimensional case is a nice math-stat problem, if you like that sort of thing, but maybe you’d rather have the calculations done for you.
Here’s an app.
The mathematical setup is that you have a prior distribution for a location parameter θ centered at zero, and you see a data point x that’s a long way from zero. If π and f are the prior and likelihood, the posterior is proportional to π(θ)f(x − θ).
When the prior is heavy-tailed and the data distribution isn’t, you’re willing to believe θ can be weird, so a very large x means your posterior for θ will be near x. When the data distribution is heavy-tailed and the prior isn’t, you’re willing to believe x can be a long way from θ, but not that θ can be a long way from zero, so the prior ends up pretty much like the posterior – you ‘reject’ the data.
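The two behaviours are easy to see numerically. Here’s a minimal sketch (not the app’s actual code) that computes the posterior mean on a grid for a location parameter θ with a surprising observation x = 10, using Laplace (heavy-tailed, relatively speaking) and Normal densities; the function and parameter names are mine, not from the app.

```python
import math

def normal(z, scale=1.0):
    # Normal density centred at zero
    return math.exp(-0.5 * (z / scale) ** 2) / (scale * math.sqrt(2 * math.pi))

def laplace(z, scale=1.0):
    # Laplace (double-exponential) density centred at zero
    return math.exp(-abs(z) / scale) / (2 * scale)

def posterior_mean(prior, lik, x, lo=-50.0, hi=50.0, n=20001):
    # Posterior is proportional to prior(theta) * lik(x - theta);
    # normalise numerically over a fine grid and take the mean.
    thetas = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    w = [prior(t) * lik(x - t) for t in thetas]
    total = sum(w)
    return sum(t * wi for t, wi in zip(thetas, w)) / total

x = 10.0  # very surprising if everything is centred at zero

# Heavier-tailed prior, Normal data: the posterior follows the data (mean near x)
print(posterior_mean(laplace, normal, x))

# Normal prior, heavier-tailed data: the posterior sticks with the prior (mean near 0)
print(posterior_mean(normal, laplace, x))
```

With a Laplace prior and Normal data the posterior mean comes out near x; with the roles swapped it stays near zero, which is the ‘rejecting the data’ behaviour described above.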
The details, though, depend on just how heavy-tailed things are, and the app lets you play around with a range of possibilities. Laplace–Laplace, and the combinations pairing a heavier-tailed distribution with a Normal, might be interesting.
The code is here.