Feb 16, 2014

This is Jeopardy.

This past week, one of my colleagues flew out to do a taping for Jeopardy. He's a smart guy so I think he'll do well, and I'm looking forward to watching him when his show airs in a few months.

In the lead-up to his time on the show, we refrained from testing him with trivia, but we did get into some serious debates about wagering strategy. One specific scenario we spent some time on was the right wager to make when the last clue of Double Jeopardy is a Daily Double.

The root of the discussion is what I'm calling the Stegner Conjecture, named for the colleague who was adamant about making a big wager to put the game away, as long as you view the category as neutral or better. His reasoning is that you know more about the Daily Double category than you do about the Final Jeopardy clue, so you should use that knowledge to lock up the game before Final Jeopardy.

While I understood his logic, I wasn't sure I would necessarily bet by his rules if I was ever placed in that spot. But... that opinion may have changed as I worked through the problem.

So first, let’s establish the scenario. For simplicity, we focused on games with just two players heading into Final Jeopardy, separated by a small amount of money, where the player in the lead gets the Daily Double as the final clue of Double Jeopardy. Assume that the leader (P1) has $10,200 and the second-place player (P2) has $10,000. Now, there are three basic betting scenarios:
  1. Bet big to put the game away (The Stegner Conjecture) – we’ll fix this amount at $10,000; a correct response would guarantee that P1 wins the game, as they would have more than double P2's score
  2. Bet very small – we’ll fix this amount at $0; this means the game is determined by the contestants’ bets and responses in Final Jeopardy
  3. Wager an amount in the middle – we’ll fix this amount at $5,000; P1 will enter Final Jeopardy with either $15,200 or $5,200
Now, we need to look at the outcome of Final Jeopardy under each of the betting scenarios above. We can break that down using the probabilities of the four possible response combinations, listed by score position (the leader's response first, the trailer's second). Looking at the past 10 years, in games ending with two players in Final Jeopardy:
  • Right Right: 34%
  • Right Wrong: 27%
  • Wrong Wrong: 25%
  • Wrong Right: 14%
What this tells us--assuming we are comfortable using historical probabilities to predict future outcomes--is that P1 will have a 61% chance (34% + 27%) of being right in Final Jeopardy, and P2 will have a 48% chance (34% + 14%) of being correct. P1 will be wrong when P2 is right just 14% of the time.

For what it's worth, this degree of variation from an even distribution (25% - 25% - 25% - 25%) does pass a chi-square test for significance (p < 0.05). That said, I also modeled the outcomes using a uniform distribution for comparison purposes.
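For anyone who wants to poke at it, here's a quick sketch of that check in Python. The post only has the percentages, not the underlying number of games, so the 1,000-game sample size below is purely an assumption (and the p-value is sensitive to it); the marginal probabilities fall out of the same table.

```python
from scipy.stats import chisquare

# Joint Final Jeopardy outcomes (leader's response, trailer's response), from above
observed_pct = {
    ("right", "right"): 0.34,
    ("right", "wrong"): 0.27,
    ("wrong", "wrong"): 0.25,
    ("wrong", "right"): 0.14,
}

# Marginal probability that each position responds correctly
p1_right = sum(p for (lead, _), p in observed_pct.items() if lead == "right")    # 0.61
p2_right = sum(p for (_, trail), p in observed_pct.items() if trail == "right")  # 0.48

# Assumed sample size -- the actual number of two-player games isn't given here
n_games = 1000
observed = [round(p * n_games) for p in observed_pct.values()]
expected = [n_games / 4] * 4  # the even 25/25/25/25 null hypothesis

stat, p_value = chisquare(observed, f_exp=expected)
print(f"P1 right: {p1_right:.0%}, P2 right: {p2_right:.0%}")
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
```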

Now, for each betting scenario, I determined the Final Jeopardy betting strategy that would give each contestant the best chance of winning. In some situations, the trailing player's bet is irrelevant, as there is no way for a rationally betting leader to lose.


Based on this analysis, P1 has the following chance of winning for each betting scenario. Note that the first number assumes an even distribution and the second uses the historical distribution.

Stegner Conjecture
  • Right answer on Daily Double: 100% / 100%
  • Wrong answer on Daily Double: 0% / 0%
Medium Wager
  • Right answer on Daily Double: 75% / 86%
  • Wrong answer on Daily Double: 25% / 14%
Low/no Wager
  • Right or wrong answer on Daily Double: 50% / 61%
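Those numbers can be reproduced with a fairly small model. The sketch below makes two simplifying assumptions that aren't spelled out above: the Final Jeopardy leader only considers the standard cover bet or a zero bet, and the trailer best-responds by either standing pat or betting everything; ties are ignored. The dollar figures and probabilities are the ones from this post, and running it reproduces the percentages above.

```python
# Joint Final Jeopardy outcome probabilities, keyed (leader_right, trailer_right)
HIST = {(True, True): 0.34, (True, False): 0.27, (False, False): 0.25, (False, True): 0.14}
UNIF = {k: 0.25 for k in HIST}

def trailer_win_prob(leader, trailer, leader_bet, probs):
    """Trailer's best chance of winning against a given leader bet."""
    if_right, if_wrong = leader + leader_bet, leader - leader_bet
    # Option 1: stand pat -- the trailer's own response no longer matters
    stand = sum(p for (lr, _), p in probs.items()
                if trailer > (if_right if lr else if_wrong))
    # Option 2: bet everything -- the trailer must be right and overtake the leader
    all_in = sum(p for (lr, tr), p in probs.items()
                 if tr and 2 * trailer > (if_right if lr else if_wrong))
    return max(stand, all_in)

def leader_win_prob(leader, trailer, probs):
    """Leader's chance of winning, betting either the cover bet or zero."""
    if leader > 2 * trailer:
        return 1.0  # locked game: bet less than the gap and win no matter what
    cover = 2 * trailer - leader + 1  # smallest bet that wins outright if correct
    return max(1 - trailer_win_prob(leader, trailer, bet, probs)
               for bet in (cover, 0))

def p1_win_prob(p1, p2, probs):
    """P1's chance of winning Final Jeopardy from scores p1 vs. p2."""
    if p1 >= p2:
        return leader_win_prob(p1, p2, probs)
    return 1 - leader_win_prob(p2, p1, probs)

P1, P2 = 10_200, 10_000
for label, dd_bet in [("Stegner ($10,000)", 10_000), ("Medium ($5,000)", 5_000), ("Zero ($0)", 0)]:
    for name, probs in [("uniform", UNIF), ("historical", HIST)]:
        right = p1_win_prob(P1 + dd_bet, P2, probs)
        wrong = p1_win_prob(P1 - dd_bet, P2, probs)
        print(f"{label:18s} {name:10s}  DD right: {right:.0%}  DD wrong: {wrong:.0%}")
```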


So now, the only task left to do is to assign a probability for P1 answering the Daily Double correctly, and then calculate the final expected outcome for P1 given the different betting scenarios and the Daily Double probability.
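Here's a minimal sketch of that calculation using the historical conditional win probabilities from the table above (swap in 0.75/0.25 and 0.50/0.50 for the medium and zero wagers to get the uniform-distribution version).

```python
def expected_win(p_dd, win_if_right, win_if_wrong):
    """P1's overall win chance given probability p_dd of answering the Daily Double correctly."""
    return p_dd * win_if_right + (1 - p_dd) * win_if_wrong

# Conditional win probabilities from the historical table above
strategies = {
    "Stegner ($10,000)": (1.00, 0.00),
    "Medium ($5,000)":   (0.86, 0.14),
    "Zero ($0)":         (0.61, 0.61),  # a $0 wager makes the Daily Double response irrelevant
}

for p_dd in (0.40, 0.50, 0.60, 0.65, 0.70, 0.80):
    evs = {name: expected_win(p_dd, *w) for name, w in strategies.items()}
    best = max(evs, key=evs.get)
    row = "  ".join(f"{name}: {ev:.2f}" for name, ev in evs.items())
    print(f"p = {p_dd:.2f}  {row}  -> best: {best}")
```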

Expected Value of game, based on historic Final Jeopardy outcomes

Expected Value of game, based on uniform distribution of Final Jeopardy outcomes

Given this, the model suggests that:
  • If you go by historic figures, you should bet big if you think you have greater than a 61% chance of getting the Daily Double correct, otherwise you should bet zero
  • If you go by an even distribution, you should bet big if you have at least a 50/50 chance, otherwise you should bet zero
  • There is no scenario where a medium wager is optimal
Given that you know nothing about the Final Jeopardy category at this point, it really becomes an exercise in deciding whether you feel more confident answering the Daily Double, whose category you can at least see, or a Final Jeopardy clue you know nothing about.

So in the end, the Stegner Conjecture holds, although I'll admit that I would find it difficult to make that big of a bet unless I felt significantly more confident than the 50% or 61% suggested above. Part of that is the emotional player winning out over the rational player, but it's also an indication of how difficult it is to put a quantitative degree of confidence on a situation like the one above.

Feb 1, 2014

Estimation Using Probability

Earlier this year, I read Forecasting by Combining Expert Opinion, which presented a method for using probability to generate a forecast based on inputs from multiple expert parties. To do this, the author created a triangular distribution for each expert based on that person's minimum, maximum, and most-likely estimates. They then ran a Monte Carlo simulation to arrive at the final forecast in the form of a probability distribution.

Revolution Analytics Stochastic Forecasting
Source: http://blog.revolutionanalytics.com/2014/01/forecasting-by-combining-expert-opinion.html
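As a rough illustration of the idea (not the original post's R code), here's what that might look like in Python: each expert's three-point estimate becomes a triangular distribution, and the combined forecast is built from equally weighted draws across experts. The expert numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert inputs: (minimum, most likely, maximum) for the same quantity
experts = [
    (80, 100, 140),
    (90, 110, 160),
    (70, 105, 150),
]

n_draws = 100_000
# One simple combination: pool equally weighted draws from each expert's
# triangular distribution, giving a mixture as the combined forecast
samples = np.concatenate([
    rng.triangular(lo, mode, hi, size=n_draws) for lo, mode, hi in experts
])

print(f"combined mean: {samples.mean():.1f}")
print("percentiles (10/50/90):", np.percentile(samples, [10, 50, 90]).round(1))
```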

I thought this was a pretty smart and elegant way to arrive at a forecast, and started thinking of other areas where I've seen forecasts--project effort estimates, revenue forecasts, etc.--use much simpler means of prediction. Sometimes these are accurate, sometimes not at all.

In the world of agile software development, teams estimating the effort required to deliver product scope often use multi-point estimates as an input to their project-level estimate. For each story in their backlog, the team provides an Aggressive But Possible (ABP) estimate, representing the effort they expect to hit 50% of the time, and a Highly Probable (HP) estimate, representing the effort they expect to hit 90% of the time.

Teams will then use any of a variety of methods to arrive at the product-level estimate; typically they will sum the ABP estimates and then add a buffer, either half the difference between the total ABP and total HP, or the root sum of squares of the story-level differences between ABP and HP.
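To make the two buffer approaches concrete, here's a small sketch with a made-up backlog of ABP/HP pairs.

```python
from math import sqrt

# Hypothetical backlog: (ABP, HP) estimates per story, in ideal days
stories = [(3, 5), (5, 9), (2, 3), (8, 13), (5, 8)]

abp_total = sum(abp for abp, _ in stories)
hp_total = sum(hp for _, hp in stories)

# Method 1: add half of the total ABP-to-HP gap as buffer
half_gap = abp_total + (hp_total - abp_total) / 2

# Method 2: add a root-sum-of-squares buffer of the story-level gaps
rss = abp_total + sqrt(sum((hp - abp) ** 2 for abp, hp in stories))

print(f"sum of ABP:         {abp_total}")
print(f"ABP + half the gap: {half_gap}")
print(f"ABP + RSS buffer:   {rss:.1f}")
```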

A number of different resources suggest that the probability density function of agile estimates follows a right-skewed distribution, like the one below from the Kanban Way. The ABP is the mode of the distribution, with the HP falling further to the right on the curve.

Agile effort probability curve from the Kanban Way
Source: http://www.kanbanway.com/on-estimating-project-tasks
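To see how a simulation of the whole project might work, here's one possible sketch: treat each story as a right-skewed (here, lognormal) distribution whose mode is the ABP and whose 90th percentile is the HP, then Monte Carlo the project total. Both the lognormal choice and the backlog numbers are my own assumptions, not anything from the Kanban Way post. Comparing the simulated percentiles against the half-gap and RSS buffers from the earlier sketch would be one way to get at the first question below.

```python
import numpy as np

rng = np.random.default_rng(1)
Z90 = 1.2816  # 90th-percentile z-score

# Hypothetical backlog: (ABP, HP) per story, same numbers as the sketch above
stories = [(3, 5), (5, 9), (2, 3), (8, 13), (5, 8)]

def lognormal_params(abp, hp):
    """Pick lognormal (mu, sigma) so the mode is ABP and the 90th percentile is HP.
    mode = exp(mu - sigma^2), p90 = exp(mu + Z90 * sigma)."""
    gap = np.log(hp / abp)
    sigma = (-Z90 + np.sqrt(Z90**2 + 4 * gap)) / 2
    mu = np.log(abp) + sigma**2
    return mu, sigma

n_trials = 100_000
totals = np.zeros(n_trials)
for abp, hp in stories:
    mu, sigma = lognormal_params(abp, hp)
    totals += rng.lognormal(mu, sigma, size=n_trials)

print("project effort percentiles (50/80/90):", np.percentile(totals, [50, 80, 90]).round(1))
```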

So now, I'm wondering about a couple things:
  1. How would a Monte Carlo simulation of project effort based on a distribution like the one above compare to the project estimates based on simpler estimation techniques?
  2. How would the estimate from a technique like the one I referenced at the beginning of the post compare to that coming out of a Wideband Delphi-based estimate?
I will note that I am a big proponent of Wideband Delphi, as I have used it with great success on a number of projects. The transfer of knowledge that comes during the estimation discussions can be extremely valuable. But that doesn't mean it wouldn't be fun to compare outputs of the two processes.