
How much should I bet?

I really like having a home. It's great. My family doesn't have to live outside, and I have a place to keep all my whiskey. We have heat and water and internet. To use the economic term – my first home has a lot of utility.

We've thought about buying a second home - maybe a small house out in Litchfield County, Connecticut; I really like it out there. It would be great to have a spot to get out of the city. I'm sure we'd use it for a lot of weekends. But it would be less important than our first home; it would have less utility.

I've never thought about a third home, but something in Florida might be nice. I don't think I'd get down there as much, I'd already be busy between Brooklyn and Connecticut. A fourth home? I usually get to Europe once a year, I guess a pied-a-terre in Paris would have some purpose. What's next? A ski house in Vail? Honestly, it sounds like more trouble than it's worth. My utility for the fifth home would be minimal.

The same is true for plain cash. I'm really happy to have that first $1000 in my bank account - I use that to buy food and other necessities. The second $1000 is really important too - sometimes I need at least $2000 in cash for large bills. If I have $10,000 in the bank, the last $1000 is not nearly as important - I have nothing against money, but if it disappeared the impact would be significantly smaller.

It is both obvious and counter-intuitive: the more you have of something, the less each additional unit is worth to you. This causes problems for gamblers - or money managers. Their usual goal is to maximize expected returns (possibly with a fixed level of risk). This is not consistent with the diminishing marginal utility we described above.

Diminishing marginal utility has been understood for hundreds of years; Daniel Bernoulli published a clear explanation as early as 1738. Bernoulli proposed that utility is logarithmic.

[Chart: linear vs. logarithmic utility. The blue one goes up faster than the orange one...]

For those who don't know what a logarithm is, don't worry. For our purposes, it means that owning fifty houses would provide only about four times as much utility as owning one; linear utility would mean they are fifty times as useful. Now, every person is going to have a different utility function for every asset - I don't eat octopus, so I have zero utility for any quantity of the stuff - but in general, logarithmic utility looks far more reasonable than linear.
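To put a rough number on that claim, here is a minimal sketch in Python. It assumes natural-log utility and treats one house as roughly one unit of utility; the post doesn't pin down a base or a scale, so those are my assumptions.

```python
import math

houses = 50
linear_utility = houses           # every house as valuable as the first: 50 units
log_utility = math.log(houses)    # ln(50) ≈ 3.9 - roughly "four times" the first house
print(f"linear: {linear_utility}, logarithmic: {log_utility:.1f}")
```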

We can apply this back to our gambler or money manager. Rather than trying to maximize their gains, they should be trying to maximize the logarithm of their assets. Fortunately, we have some help in this: in the long run, the Kelly Criterion will outperform any other strategy at maximizing the average logarithm of returns. Here's how it works:

Consider that you have a total bankroll of $1000; this is all the money you have in the world. Somebody offers to play a game with you. He will flip a coin; if it comes up heads, you win $1.20 for every $1.00 you bet, and if it comes up tails, you lose your $1.00. You can bet any amount (e.g. bet $5 to win $6, or $1000 to win $1200), and you can play the game as many times as you like. How much should you bet?
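Before reaching for a formula, you can sanity-check the answer by brute force: search for the bet fraction that maximizes the expected logarithm of your bankroll after one flip. A minimal sketch for this particular game (the formula below gives the same number in closed form):

```python
import math

# Grid-search the bet fraction that maximizes expected log growth per flip
# for the game above: 50/50 to win $1.20 or lose $1.00 per $1 bet.
best = max(
    (f / 10_000 for f in range(10_000 - 1)),
    key=lambda f: 0.5 * math.log(1 + 1.2 * f) + 0.5 * math.log(1 - f),
)
print(f"Best fraction of bankroll to bet: {best:.2%}")
```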

You have "an edge" in this game; clearly you want to bet something. Let's assume that your utility function is logarithmic and that you want to maximize your expected utility in the long run. The Kelly Criterion tells you that your bet size should be the percentage of your bankroll equal to:

(expected net winnings per $1 bet) ÷ (net winnings per $1 bet if you win)

In this game, your expected net winnings are $0.10 per $1 bet: if you flip the coin twice, you expect to win $1.20 once and lose $1.00 once, or $0.20 total - $0.10 per flip. Your net winnings if you win are $1.20 per $1 bet. So:

$0.10 ÷ $1.20 = 8.33%, meaning your first bet should be $83.33 of your $1000 bankroll.
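That calculation is easy to wrap into a small helper. A minimal sketch - the function name and signature are my own, not anything standard:

```python
def kelly_fraction(p_win, win_per_dollar, loss_per_dollar=1.0):
    """Fraction of bankroll to bet: expected net winnings per $1 staked,
    divided by the net winnings per $1 staked if you win."""
    expected_net = p_win * win_per_dollar - (1 - p_win) * loss_per_dollar
    return expected_net / win_per_dollar

# The coin-flip game above: 50/50 to win $1.20 or lose $1.00 per $1 bet.
f = kelly_fraction(0.5, 1.20)
print(f"Bet {f:.2%} of bankroll, i.e. ${1000 * f:.2f} of the $1000 bankroll")
```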

If you lose, your next bet would be 8.33% of your new bankroll of $916.67, or $76.39. If you win, your new bankroll would be $1100.00 and your next bet $91.67. There are two simple extensions of this that bear mentioning. If a game has no advantage, expected net winnings are zero, and the Kelly Criterion tells you to bet nothing. If the game has a negative advantage, it tells you to bet a negative amount; this can be interpreted as betting on the other side of the game (if your opponent will let you - casinos usually won't).
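To see why this is the right amount rather than something bigger or smaller, you can simulate the game many times with different fixed fractions. The sketch below is my own illustration, with the flip count and seed chosen arbitrarily; the same seed means every fraction sees the same sequence of flips. Over a long run, the 8.33% Kelly fraction ends up far ahead of betting much less (too timid) or much more (too volatile).

```python
import random

def simulate(fraction, flips=10_000, bankroll=1000.0, seed=0):
    """Repeatedly bet a fixed fraction of bankroll on the
    win-$1.20 / lose-$1.00 coin flip; return the final bankroll."""
    rng = random.Random(seed)  # fixed seed: each fraction faces the same flips
    for _ in range(flips):
        stake = bankroll * fraction
        bankroll += stake * 1.20 if rng.random() < 0.5 else -stake
    return bankroll

for f in (0.02, 0.0833, 0.20, 0.50):
    print(f"betting {f:.2%} of bankroll -> final bankroll ${simulate(f):,.2f}")
```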

Real people use the Kelly Criterion in real situations. The first to do so was probably mathematician Edward Thorp, who took it to the blackjack tables. In his seminal book, Beat the Dealer, he was the first to mathematically prove that card counting could beat the house. In the investing world, Warren Buffett, Bill Gross and Jim Simons have all referred to Kelly-type principles in their work.

I'd be remiss not to return to Ed Thorp for a minute - he was a fascinating guy. After "solving" blackjack, he created a proto-Apple watch that allowed him to predict the roulette wheel. He later founded one of the first hedge funds and became fantastically wealthy. In these endeavors, he worked closely with Claude Shannon - equally fascinating. Shannon is one of very few who are considered to have personally created a branch of mathematics; in his case, Information Theory. Thorp and Shannon were two of a larger, loose group of scientists, working at the intersection of mathematics, game theory and economics in the period of roughly 1940-1975. John Nash and John von Neumann were in the same orbit.

Among some of this group, there was a surprisingly virulent opposition to the Kelly Criterion. The opposition was centered mostly among the economists, especially Paul Samuelson. Samuelson is less known today than Keynes, Hayek and Friedman, but he was not exactly a marginal figure. He won a Nobel Prize and his New York Times obituary referred to him as the "foremost academic economist of the 20th century." Samuelson's hatred of the Criterion matched his colorful personality. He published a peer-reviewed article "disproving" it, written entirely in words of one syllable - presumably to demonstrate the Criterion's obvious untruth.

The problem with strident opposition is that it usually has no answer to the obvious question: what should I do instead? Samuelson did not accept the assumption of logarithmic utility, but it's not clear what he would have used in its place. Even among its proponents, most agree that following the Kelly Criterion to the letter entails more risk than real people are willing to take; it only works in the long run, by which point we are all dead. Rather than throw the baby out with the proverbial bathwater, these hesitant proponents have created risk-modified versions that still attempt to maximize a realistic utility function.
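One common risk-modified version (not named above, but widely used in practice) is "fractional Kelly": bet only a fixed fraction - often half - of the full Kelly stake, giving up some growth in exchange for much smaller swings. A minimal sketch, reusing the coin-flip game from above:

```python
def fractional_kelly(p_win, win_per_dollar, scale=0.5, loss_per_dollar=1.0):
    """Scale down the full Kelly fraction; scale=0.5 is 'half Kelly'."""
    full = (p_win * win_per_dollar - (1 - p_win) * loss_per_dollar) / win_per_dollar
    return scale * full

print(f"{fractional_kelly(0.5, 1.20):.2%} of bankroll")  # half Kelly: about 4.17%
```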

And unlike Samuelson, they still provide an answer to our real question: How much should I bet?

 

I just want to include a quick post-script here - I've been very unfair to Paul Samuelson. I love Paul Samuelson. He basically built my alma mater's economics department by himself. His greatness in every area of his research - except his opposition to the Kelly Criterion - is nearly beyond question.

I'm not doing any of the scientists I mention proper justice in this article. If you want to know more, I'd recommend this excellent recent book that discusses their exploits. Unfortunately, it also portrays Samuelson as the bad guy...

