A beta distribution is approximately normal in shape when its parameters are large, so you can use a normal approximation to compute beta inequality probabilities such as P(X > Y) for independent beta random variables X and Y. The corresponding normal inequality can be computed in closed form.
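The idea can be sketched in a few lines of Python (my own minimal sketch, not the report's code): match each beta distribution with a normal of the same mean and variance, and then P(X > Y) reduces to a normal tail probability, since the difference of two independent normals is normal.

```python
from math import erf, sqrt

def beta_inequality_normal(a, b, c, d):
    """Approximate P(X > Y) for independent X ~ beta(a, b), Y ~ beta(c, d)
    by matching each beta with a normal of the same mean and variance."""
    mx = a / (a + b)
    vx = a * b / ((a + b)**2 * (a + b + 1))
    my = c / (c + d)
    vy = c * d / ((c + d)**2 * (c + d + 1))
    # X - Y is then approximately normal with mean mx - my and variance vx + vy,
    # so P(X - Y > 0) is a standard normal CDF evaluation.
    z = (mx - my) / sqrt(vx + vy)
    return 0.5 * (1 + erf(z / sqrt(2)))
```

Note that when the two distributions are identical, symmetry forces the answer to be exactly 1/2, which the approximation reproduces.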

This works surprisingly well. Even when the beta parameters are small and the normal approximation is a bad fit, the corresponding inequality approximation is pretty good.

For more details, see the tech report *Fast approximation of beta inequalities*.

**Related post**: Beta inequalities in R

I noticed in the paper the comment

> Perhaps one day Bayesian trials will be designed on Bayesian principles, but for now this is rarely done.

Are there any exemplar trials of Bayesian design, or texts on Bayesian clinical trials?

gwern: You can go to Amazon and search on “bayesian clinical trials” and find a few books.

But all Bayesian clinical trials that I know of are designed according to their frequentist operating characteristics. Maybe they’re designed on Bayesian principles, but then they’re obligated to demonstrate good frequentist characteristics too. And typically the Bayesian parameters are tweaked to achieve desirable frequentist characteristics.

Oh, I see what you mean. So how do these tweaks or design choices make the Bayesian trials in practice worse than they could be? Are the differences minor and cosmetic?

I don’t think the Bayesian trials are necessarily much worse off (by Bayesian criteria) for having to satisfy frequentist criteria, but it’s a lot of work to make this happen.

There’s also a temptation to think of trial parameters as arbitrary knobs to turn in a design. The prior, for example, can lose its meaning as a representation of prior belief and become just another tuning device: “So if I change my prior belief (!) to this, I get better operating characteristics.” This is neither Subjective Bayes nor Objective Bayes. More like Machiavellian Bayes. :) Once the design parameters lose their Bayesian interpretation, they also lose their Bayesian justification.