> It is the same way with traditional hypothesis testing. You take two models and compare their likelihood.
With a Bayes factor you compare marginal likelihoods: each model's likelihood is averaged over its parameters, weighted by the priors. With a likelihood ratio, you pick the best-fitting parameters and take the ratio of those maximized likelihoods.
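Written out (standard definitions; the notation here is mine, not from the parent comment), for data D and models M_1, M_2 with parameters θ_1, θ_2:

```latex
% Bayes factor: ratio of marginal likelihoods; each model's likelihood
% is integrated over its prior on the parameters.
\mathrm{BF}_{12} = \frac{\int p(D \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, d\theta_1}
                        {\int p(D \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, d\theta_2}

% Likelihood ratio: ratio of maximized likelihoods; each model is
% evaluated at its best-fitting (maximum-likelihood) parameters.
\mathrm{LR}_{12} = \frac{\sup_{\theta_1} p(D \mid \theta_1, M_1)}
                        {\sup_{\theta_2} p(D \mid \theta_2, M_2)}
```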
This means a model used in a Bayes factor must be able to make predictions that follow probability axioms. Models in likelihood ratios don’t have this restriction.
I agree likelihood ratios and Bayes factors are similar. They’re also different.
> With a Bayes factor you compare marginal likelihoods: each model's likelihood is averaged over its parameters, weighted by the priors. With a likelihood ratio, you pick the best-fitting parameters and take the ratio of those maximized likelihoods.
Yeah, that's the difference that I mentioned. And it seems very different from whatever "it put the probability of 0% at 1 and everything else at 0" is supposed to refer to.
> This means a model used in a Bayes factor must be able to make predictions that follow probability axioms. Models in likelihood ratios don’t have this restriction.
Models in likelihood ratios absolutely have to follow probability axioms, otherwise it would make no sense to apply probability theory to them at all.
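For what it's worth, both quantities are easy to compute side by side. A minimal sketch, assuming scipy and a made-up dataset of 14 heads in 20 coin flips (nothing here is from the thread itself):

```python
# Toy example: 14 heads in 20 flips.
# M0: fair coin, p = 0.5 (no free parameters).
# M1: p unknown; MLE for the likelihood ratio, Uniform(0,1) prior
#     on p for the Bayes factor.
from scipy.stats import binom
from scipy.integrate import quad

n, k = 20, 14

# Likelihood ratio: each model plugs in its best-fitting parameters.
lik_m0 = binom.pmf(k, n, 0.5)          # p fixed at 0.5
lik_m1_max = binom.pmf(k, n, k / n)    # MLE p_hat = k/n = 0.7
likelihood_ratio = lik_m1_max / lik_m0

# Bayes factor: M1's likelihood is averaged over its prior on p.
marginal_m1, _ = quad(lambda p: binom.pmf(k, n, p), 0, 1)  # = 1/(n+1)
bayes_factor = marginal_m1 / lik_m0

print(f"likelihood ratio (M1/M0): {likelihood_ratio:.2f}")
print(f"Bayes factor     (M1/M0): {bayes_factor:.2f}")
```

On that data the maximized likelihood favors the free-parameter model by roughly 5 to 1, while the marginal likelihood, which has to spread prior mass over all values of p, gives a Bayes factor of only about 1.3. And note that both models enter either ratio as proper probability distributions over the data.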