> this also implies that p(utility(money(optimal strategy)) > utility(money(other strategy))) --> 1

That's true (at least as long as the other strategy also bets a constant proportion of wealth each turn). But it doesn't mean that Kelly is optimal. It could be that in the (increasingly unlikely) cases when the other strategy beats Kelly, the utility produced by the other strategy is much greater than that produced by Kelly. Then the other strategy could still be better overall.

In other words, even though we have

    p(utility(money(Kelly strategy)) > utility(money(other strategy))) --> 1
we also have

    Expectation[utility(money(Kelly strategy)) - utility(money(other strategy))] < 0
Here's a simplified example of what's going on: consider the bet where you win a penny with probability 1-1/n, and otherwise you lose $2^n. This bet gets worse and worse as n gets larger (its expected value tends to minus infinity), but the probability that taking it leaves you with higher utility tends to 1.
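A quick numerical sketch of that bet (treating utility as linear in dollars purely for illustration):

    # Sketch: expected value vs. win probability for the bet
    # "win $0.01 with probability 1 - 1/n, else lose $2^n".
    for n in [5, 10, 20, 30]:
        p_win = 1 - 1 / n
        expected_value = p_win * 0.01 - (1 / n) * 2 ** n
        print(f"n={n}: P(come out ahead)={p_win:.3f}, EV=${expected_value:,.2f}")

The expected value blows up to minus infinity even as the chance of coming out ahead goes to 1, which is exactly the gap between "beats the other strategy with probability --> 1" and "has higher expected utility".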


>That's true (at least as long as the other strategy also bets a constant proportion of wealth each turn).

Does it require that assumption? I don't think it even requires identically distributed returns on each component bet. It just needs money_(i+1) = f(money_i, x_i) to have strictly positive support, right? Then you can push it into log space and apply the law of large numbers, which tells you to maximize the expected log of f(money_i, x_i) with respect to x_i. Nothing about fractions or constant fractions shows up in that derivation.

The usual statement of the Kelly criterion is about fractions, but the more general question is whether maximizing expected log is (eventually) optimal, which seems to only require being able to apply the law of large numbers in log space.
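To make the log-space maximization step concrete, here's a minimal sketch (the even-odds coin with win probability 0.6 is my own example, not anything from the thread): the fraction that maximizes the expected log of the per-round growth factor is the textbook Kelly fraction 2p - 1, and the law-of-large-numbers step then says the realized average log growth concentrates around that maximized expectation.

    # Sketch: numerically maximize the expected log growth of one
    # component bet.  Even-odds coin with win probability p = 0.6
    # (an assumed example); textbook Kelly fraction is 2p - 1 = 0.2.
    import math

    p = 0.6

    def expected_log_growth(f):
        # E[log(money_{i+1} / money_i)] when betting fraction f at even odds
        return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

    best_f = max((i / 1000 for i in range(999)), key=expected_log_growth)
    print(best_f, 2 * p - 1)   # both come out near 0.2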


Imagine we started with $100 and were betting on a fair coin with fair odds. There's no edge so Kelly says to bet $0, and hence the Kelly strategy stays at $100 forever. You can give yourself a very high chance of beating the Kelly strategy if you use a Martingale strategy. Bet $0.01. If you win then you have $100.01 and you should not bet again. If you lose then bet $0.02 next time, and then $0.04 and then $0.08 doubling each time until you win. When you win you will go up to $100.01, which beats the Kelly strategy. So Martingale beats Kelly unless you go bankrupt by losing too many times in a row before getting a win, which only happens with a very small probability. So there are strategies that have a high probability of beating the Kelly criterion.

For simplicity I gave the above example in the degenerate case where the edge is zero. But I think the analogous strategy works in all cases. If you aim for just a penny over the Kelly strategy then you have a high probability of success.
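Here's a rough simulation of the zero-edge version (the $100 bankroll, penny bet sizes, and bust rule are assumptions for illustration, not a precise model of the comment above):

    # Sketch: martingale vs. "bet nothing" (Kelly with zero edge) on a
    # fair coin, starting from $100.
    import random

    def martingale_trial(bankroll=100.00, base_bet=0.01):
        bet = base_bet
        while bet <= bankroll:
            if random.random() < 0.5:          # win: recover all losses plus $0.01
                return True
            bankroll -= bet
            bet *= 2                           # double the stake after each loss
        return False                           # can't cover the next bet: bust

    trials = 100_000
    wins = sum(martingale_trial() for _ in range(trials))
    print(f"ended above $100: {wins / trials:.4%}")   # roughly 99.99%

The missing ~0.01% is the branch where you lose enough times in a row that you can't cover the next doubled bet; with a fair coin that rare branch is costly enough that the expected final wealth is still exactly $100, the same as not betting at all, which is the probability-vs-expectation gap again.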


Fair point. And while it's too late to edit my post, I think I found the flaw in the math.

Just because X converges in probability to x, E[f(X)] doesn't necessarily converge to f(x): convergence in probability says nothing about the vanishing-probability tail, and that tail is exactly where the utility difference can blow up. If that implication did hold, the argument for logarithmic utility would be sound, but it doesn't.



