In a previous post, I did a simple analysis to determine when you’re likely to make money while playing Acey Deucey. The upshot of that post is basically that there aren’t many good opportunities to bet, so when you get a chance, you have to bet big. The unanswered question from that post is “how much?” If you see a K-2 spread, you know that’s a good time to bet, but what fraction of your bankroll should you bet? This post will set out to answer that question.

First, some assumptions:

1. We’re playing repeated rounds of Acey Deucey, the rules of which are described in my previous post.
2. We have a starting bankroll that we cannot replenish.
3. We care about maximizing our long-term wealth.
4. We can play as many games as we want, to infinity.

Intuitively, we can see the trade-off between betting more now and saving to bet more in the future. If the expected value is positive, we should bet something, so we can’t bet 0%. Likewise, we should want to bet more as the expected value increases. However, our bankroll is path-dependent: every dollar we lose now is a dollar we don’t have to bet in the future. Further, if our bankroll ever drops to 0, we ‘go bust’ and are permanently barred from playing again. This implies that we can only bet 100% if we have a 100% chance of profit, because even the smallest chance of going bust is unacceptable from the long-term perspective.

## The Kelly Criterion

These ideas come together nicely in the Kelly Criterion, the canonical way of thinking about questions of this nature. I first read about this concept in a nice book on the history of quantitative finance, The Physics of Wall Street by James Owen Weatherall. Kelly was a physicist at Bell Labs who, in his spare time, thought a lot about the theory behind optimal gambling.

The derivation of the Kelly Criterion is simple and instructive. Consider a coin toss bet, where the coin is weighted such that the probability of winning is $$p$$. The odds for this bet are $$b$$, such that we win $$b$$ dollars for every 1 dollar that we bet; if we lose the bet, we lose our wager. We have to decide what fraction $$f$$ of our bankroll to bet. If our goal is to maximize long-term wealth, we want to maximize the growth rate $$r$$, which in expectation is $$r = (1+fb)^{p} \cdot (1-f)^{1-p}.$$ This is the geometric mean of the two outcomes: a fraction $$p$$ of rounds multiply our bankroll by $$1+fb$$, and the remaining fraction $$1-p$$ multiply it by $$1-f$$.
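To see why this geometric mean is the right objective, it helps to simulate: over many rounds, the realized per-round growth factor converges to $$r$$. Here is a quick sketch in Python (the parameters are illustrative, not from the derivation: a 60% coin at 1:1 odds, betting 20% of bankroll each round):

```python
import math
import random

random.seed(0)

# illustrative parameters: 60% coin, 1:1 odds, betting 20% of bankroll
p, b, f, n = 0.6, 1.0, 0.2, 100_000

# track log-wealth so a long winning streak can't overflow a float
log_wealth = 0.0
for _ in range(n):
    if random.random() < p:
        log_wealth += math.log(1 + f * b)  # win: bankroll grows by f*b
    else:
        log_wealth += math.log(1 - f)      # loss: we lose the fraction f wagered

realized = math.exp(log_wealth / n)               # realized per-round growth
expected = (1 + f * b) ** p * (1 - f) ** (1 - p)  # the formula's r, about 1.02
```

The realized per-round growth factor lands very close to the formula's $$r$$, which is why maximizing $$r$$ (equivalently, the expected log of wealth) is the right long-run objective.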

To maximize, we simply take a derivative and set it equal to zero, first taking the log for easy differentiation:

$$\log(r) = p \log(1+fb) + (1-p) \log(1-f).$$

Differentiating and setting equal to zero, we arrive at $$\frac{pb}{1+fb} - \frac{1-p}{1-f} = 0,$$ and solving for $$f$$ gives us

$$f = p + \frac{p-1}{b}.$$

Some examples are useful. A coin toss with a 60% chance of paying out at a 1:1 ratio has $$p = 0.6$$ and $$b = 1$$, meaning that you should bet $$0.6 + \frac{-0.4}{1.0} = 0.2$$, or 20% of your current bankroll. If, instead, the coin toss paid out at 1:2 odds, requiring a wager of 2 dollars to make 1 dollar, then $$b = 0.5$$ and the formula says you should bet $$0.6 + \frac{-0.4}{0.5} = -0.2$$. Whenever $$f \le 0$$, as in this case, we should bet nothing.
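The formula is a one-liner, so both examples are easy to check in code (a Python sketch; the function name is mine):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly bet fraction f = p + (p - 1) / b; bet nothing when this is <= 0."""
    return p + (p - 1) / b

f_even = kelly_fraction(0.6, 1.0)  # 1:1 odds -> 0.2, i.e. bet 20%
f_long = kelly_fraction(0.6, 0.5)  # 1:2 odds -> -0.2, i.e. bet nothing
```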

## Application to Acey Deucey

In Acey Deucey, there are actually three possible outcomes for each bet: a win, a loss, and a sting (a double loss). To handle this, we can follow the methodology in this Stack Exchange post, which basically extends the derivation discussed earlier to include more terms in the expectation sum. This leads us to maximize

$$p_{win} \log(1 + f b_{win}) + p_{loss} \log(1 - f b_{loss}) + p_{sting} \log(1-fb_{sting}).$$

From the rules of the game, we know that $$b_{win}=1$$, $$b_{loss}=1$$, and $$b_{sting}=2$$.

We also need to come up with values for the three probabilities. These probabilities obviously change throughout the course of the Acey Deucey game, as cards are pulled from the deck. So, let’s simply assume that we’re playing the first draw of the deck, so that every remaining card is equally likely. Let’s also assume that the two cards aren’t both aces, just to make things a tad simpler. Given these conditions, and given a spread between two cards of $$s$$ (e.g. a King and a 4 gives $$s = 9$$, a 10 and a 2 gives $$s = 8$$), we can conclude that $$p_{win} = \frac{4(s-1)}{50}; \quad p_{loss} = \frac{44 - 4(s-1)}{50}; \quad p_{sting} = \frac{6}{50}.$$ These are calculated simply by counting cards: there are 50 cards left in the deck, 6 of which will cause a sting (the 3 remaining copies of each shown card), $$4(s-1)$$ of which will cause a win (the 4 copies of each of the $$s-1$$ ranks strictly between the two cards), and the remainder will cause a loss.
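The card counting translates directly into code (a Python sketch; `acey_probs` is my name for it):

```python
def acey_probs(spread: int):
    """First-draw win/loss/sting probabilities, two non-ace cards showing."""
    p_win = 4 * (spread - 1) / 50  # 4 copies of each rank strictly in between
    p_sting = 6 / 50               # 3 remaining copies of each shown card
    p_loss = 1 - p_win - p_sting   # everything else loses
    return p_win, p_loss, p_sting

# e.g. a Jack-2 spread (s = 9) gives roughly (0.64, 0.24, 0.12)
```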

Putting it all together, we need to optimize

$$E = \frac{4(s-1)}{50} \log(1 + f) + \frac{44 - 4(s-1)}{50} \log(1 - f) + \frac{6}{50} \log(1-2f).$$
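Before reaching for a real optimizer, a coarse grid search over $$f$$ is a useful sanity check (a Python sketch). For a 10-spread, the first-order condition is satisfied at exactly $$f = 0.2$$, and the grid search agrees:

```python
import math

def log_growth(spread: int, f: float) -> float:
    """The expected log growth rate E as a function of the bet fraction f."""
    p_win = 4 * (spread - 1) / 50
    p_loss = (44 - 4 * (spread - 1)) / 50
    p_sting = 6 / 50
    return (p_win * math.log(1 + f)
            + p_loss * math.log(1 - f)
            + p_sting * math.log(1 - 2 * f))

# f must stay strictly below 0.5, or the sting term log(1 - 2f) is undefined
grid = [i / 1000 for i in range(500)]
best_10 = max(grid, key=lambda f: log_growth(10, f))  # 0.2
```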

## Calculating results

Finally, we have all the pieces of the puzzle. For each spread between 8 (the smallest spread worth considering) and 12 (the maximum possible spread, Ace-King with the ace counted low, where every non-sting card is a winner), we need to calculate the optimal fraction of our bankroll to bet.

First, we define a function that generates the expected value equations based on the spread:

```r
create_value_func <- function(spread) {
  final_function <- function(x) {
    y <- 6 / 50 * log(1 - 2 * x) +
      ((spread - 1) * 4) / 50 * log(1 + x) +
      (50 - 6 - (spread - 1) * 4) / 50 * log(1 - x)
    return(-y)
  }
  return(final_function)
}
```


Note that we return the negative of the expression, simply because our optimizer, `stats::optim`, minimizes by default.

Next, we set up all of the value functions, one for each spread:

```r
library(purrr)

all_value_functions <- map(8:12, create_value_func)
```


We then set up a function to take a value function and optimize it. We have to give the optimizing function an initial guess, and I found that some of these initial guesses led to errors (likely because the sting term $$\log(1-2f)$$ is undefined for $$f \ge 0.5$$). So I wrapped the optimizer in `purrr::possibly` and set it to return a 0 if it errored. I also set it to try many starting guesses between 0.05 and 1, to ensure that at least one of them would converge.

```r
optimizer <- function(value_func) {
  possibly_optim <- possibly(optim, otherwise = list(par = 0))
  # iterate over starting guesses so we don't break things
  for (i in seq(0.05, 1, 0.05)) {
    result <- possibly_optim(
      par = i,
      fn = value_func,
      lower = 0,
      upper = 1,
      method = "L-BFGS-B"
    )
    if (result$par != 0) break
  }
  return(result$par)
}
```


With that done, all we need to do is apply the optimizer to each value function with `purrr::map_dbl`.

```r
all_optimal <- map_dbl(all_value_functions, optimizer)
```

| Spread | Optimal bet |
|--------|-------------|
| 8      | 0.00%       |
| 9      | 10.95%      |
| 10     | 20.00%      |
| 11     | 26.95%      |
| 12     | 32.00%      |

First, note that even though an 8-spread can be a positive EV bet (as documented in my previous post), under this set of assumptions it isn’t. That is, if you know literally nothing about the deck and you get an 8-spread, you should bet nothing.
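A quick arithmetic check makes this concrete: on the first draw, an 8-spread is exactly break-even, so the Kelly bet is zero.

```python
# 8-spread, first draw: 28 winning cards, 16 losing cards, 6 stings (double loss)
wins, losses, stings = 28, 16, 6
ev_per_dollar = (wins * 1 + losses * (-1) + stings * (-2)) / 50
# ev_per_dollar is exactly 0: break-even, so Kelly says bet nothing
```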

Second, the smallest amount that you should ever bet is around 11% of your bankroll. I think this is quite a bit higher than what I typically see people bet on 9-spreads. If you see a 2-Jack spread, it doesn’t feel that great, and I often see people bet around 1-2 dollars. But if you show up with one hundred dollars (maybe high, but not outrageous by my typical poker weekends), you should actually bet around 11 dollars, a five- to ten-fold difference!

I think this effect works the other way at the higher numbers. At a 12-spread (Ace-King), the common wisdom is to bet the entire pot, regardless of its contents. On extreme occasions, the pot can double a few times and end up over 100 dollars. In these scenarios, you should decidedly not bet the pot; you should only bet around 32% of your current bankroll.

Posted on: November 22, 2021