January 2023

Sheriff’s Dilemma

The Sheriff’s Dilemma is an example of a simultaneous-move Bayesian game. In a standard game, the Nash equilibrium is formed by each player’s understanding of the other player’s strategy. In a Bayesian game, the type of the other player also matters. We will see that through an example. But before that, the rules.

Civilian vs criminal

The sheriff encounters an armed suspect, and each must decide whether to shoot at the other.

  1. The suspect is a criminal with probability p and a civilian with probability (1-p).
  2. The sheriff shoots if the suspect shoots.
  3. The criminal always shoots.

The payoffs (listed as suspect, sheriff) are:

Suspect is a civilian (probability 1-p):

                           Sheriff
                      Shoot        Not
Suspect   Shoot      -3, -1      -1, -2
          Not        -2, -1       0,  0

Suspect is a criminal (probability p):

                           Sheriff
                      Shoot        Not
Suspect   Shoot       0,  0       2, -2
          Not        -2, -1      -1,  1

Before moving to the sheriff, let’s find the suspect’s strategy. If the suspect is a civilian, his dominant strategy is not to shoot (-2 > -3 AND 0 > -1). If he’s a criminal, the dominant strategy is to shoot (0 > -2 AND 2 > -1).

The sheriff’s payoff

The sheriff’s expected payoff from shooting = -1 x (1-p) + 0 x p = p - 1
The sheriff’s expected payoff from not shooting = 0 x (1-p) - 2 x p = -2p

So, the payoffs match at p = 1/3. If p is greater than 1/3, the sheriff is better off shooting.
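The cutoff can be checked with a short R sketch (the grid of p values is purely illustrative):

```r
# Sheriff's expected payoff for each action as a function of the
# probability p that the suspect is a criminal (the criminal always
# shoots; the civilian never shoots)
p <- seq(0, 1, by = 0.01)

shoot     <- -1 * (1 - p) + 0 * p   # = p - 1
not_shoot <-  0 * (1 - p) - 2 * p   # = -2p

# the two payoff lines cross at p = 1/3
p[which.min(abs(shoot - not_shoot))]
```

For every p above the crossing point, `shoot` exceeds `not_shoot`, so the sheriff is better off shooting.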


Penalty Kicks Continued

A quick recap: the striker can kick to his left or right, and the goalie can dive likewise. The payoff matrix (payoffs listed as striker, goalie; the striker’s payoff is the chance of a goal, and the goalie’s is its negative) is:

                            Goalie
                    Dive left     Dive right
Striker   Left      0.3, -0.3     0.9, -0.9
          Right     0.8, -0.8     0.2, -0.2

The striker’s probability of shooting to the left is ps, and he aims to give the goalie no incentive to dive either to the left or the right.

dive to left = dive to right
-0.3*ps -0.8*(1-ps) = -0.9*ps -0.2*(1-ps)
(-0.3 + 0.8 + 0.9 – 0.2)*ps = 0.8 – 0.2
ps = 0.6/1.2 = 0.5

The goalie’s probability of diving to the left is pg, and he aims to provide no incentive for the striker to shoot either to the left or right.

strike to left = strike to right
0.3*pg + 0.9*(1-pg) = 0.8*pg + 0.2*(1-pg)
(0.3 – 0.9 – 0.8 + 0.2)*pg = 0.2 – 0.9
pg = 0.7/1.2 = 0.58

The expected chance of a goal =
ps*pg*(chance of a goal when the striker and goalie both go left) +
ps*(1-pg)*(striker left, goalie right) +
(1-ps)*pg*(striker right, goalie left) +
(1-ps)*(1-pg)*(striker right, goalie right)

= 0.5*0.58*0.3 + 0.5*0.42*0.9 + 0.5*0.58*0.8 + 0.5*0.42*0.2 = 0.55

About a 55% chance of scoring a goal.
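The whole equilibrium calculation can be reproduced in a short R sketch; the matrix entries are the striker’s goal probabilities from the payoff table:

```r
# Mixed-strategy equilibrium of the penalty kick game
A <- matrix(c(0.3, 0.9,
              0.8, 0.2),
            nrow = 2, byrow = TRUE,
            dimnames = list(strike = c("left", "right"),
                            dive   = c("left", "right")))

# striker's mix: make the goalie indifferent between diving left/right
ps <- (A[2, 2] - A[2, 1]) / (A[1, 1] - A[2, 1] - A[1, 2] + A[2, 2])

# goalie's mix: make the striker indifferent between striking left/right
pg <- (A[2, 2] - A[1, 2]) / (A[1, 1] - A[1, 2] - A[2, 1] + A[2, 2])

# expected chance of a goal at equilibrium
goals <- ps * pg * A[1, 1] + ps * (1 - pg) * A[1, 2] +
  (1 - ps) * pg * A[2, 1] + (1 - ps) * (1 - pg) * A[2, 2]

round(c(ps = ps, pg = pg, goals = goals), 2)   # 0.50, 0.58, 0.55
```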


Penalty Kicks – Game Theory

It’s game theory time once again. Today, we’ll discuss the strategic options available to players while taking (and saving) football penalty kicks. In a penalty kick, the striker gets a chance to aim at the goal from about 11 m away with only the goalkeeper to beat.

Since the reaction time is short, the goalkeeper must judge the ball’s direction almost instantaneously, making the whole process a simultaneous game. Let’s build a payoff matrix reflecting such a game. The entries below are the striker’s chances of scoring (the goalie’s payoffs are their negatives):

                            Goalie
                    Dive left     Dive right
Striker   Left      0.3           0.9
          Right     0.8           0.2

Before solving it, we must note that the striker will randomise his shots, or he will hand a huge advantage to the goalkeeper. The goalie, in turn, aims to make the striker indifferent between striking to the left and the right. Then, find the probability p (of the goalkeeper diving left) at which the striker’s payoffs are equal.

strike to left = strike to right

0.3*p + 0.9*(1-p) = 0.8*p + 0.2*(1-p)
(0.3 – 0.9 – 0.8 + 0.2)*p = 0.2 – 0.9
p = 0.7/1.2 = 0.58

The goalkeeper should try to dive left slightly more often. So, how many goals are expected in such a scenario? That is next.


Battle of Sexes – Bayesian

You know the famous game-theory problem, the battle of the sexes. In a nutshell, it’s a game between A, who likes football, and B, who prefers dance. But each values the other’s company more than they dislike the other’s interest. Here is the payoff matrix based on their tastes.

                           B
                 Football        Dance
A   Football    A:10, B:5      A:0, B:0
    Dance       A:0, B:0       A:5, B:10

In the classical case, A and B are 100% certain about what the other person likes. Now imagine A becomes moody a few days a month and wants to be alone on those days. From B’s standpoint, she knows A could be in a bad mood but doesn’t know when. So she attaches a probability, p, to A’s state.

From B’s side, there is a chance p that A wants her company:

                           B
                 Football        Dance
A   Football    A:10, B:5      A:0, B:0
    Dance       A:0, B:0       A:5, B:10

And a chance (1-p) that A doesn’t. Here is the corresponding payoff matrix:

                           B
                 Football        Dance
A   Football    A:0, B:5       A:10, B:0
    Dance       A:5, B:0       A:0, B:10

Such cases come under the category of Bayesian Nash equilibrium. In the original Nash equilibrium, a player acts based on what the other player will do. In the Bayesian case, the player acts on what she knows the other person could do, given their possible types.


The Muller-Lyer illusion

Look at the above graphic. There are two lines terminated with either arrowheads or arrow tails. While the length of the lines in both cases is the same, the illusion created by the form of the terminals makes our brain believe that the one on the bottom is longer. This is the Muller-Lyer illusion.


Wisdom of the Crowd and Winner’s Curse

The wisdom of the crowd is the idea that the average estimate made by a group of people is better than that of individual experts. In other words, when a large group of non-experts (not biased by knowledge!) possessing diverse opinions predicts a quantity, their assessments tend to form a kind of bell curve – a large pack in the middle and outliers nicely distributed on either side.

In other words, an outlier of the crowd has a lower probability of being accurate. Mark this line; we will need it later.

Estimating the weight

Let’s go back to Francis Galton (1907) and the story of the prize-winning ox. It was a competition in which a crowd of about 800 people predicted the weight of an ox after it had been slaughtered and dressed. The person whose prediction came closest would win the prize. At the event in which Galton participated, he found a nearly normal distribution of ‘votes’, and the middlemost value, the popular choice or the vox populi, was 1207 lb, not far from the actual dressed weight of 1198 lb.

Bidding for the meat

Now, change the scenario: the winner is no longer the best predictor of the weight but whoever pays the most. By definition, the people in the middle of the pack, those with a better estimate of the actual value of the meat (estimated weight x market price), are not going to win. The winning bid belongs to the farthest outlier (to the right) of the distribution. This is the winner’s curse – the winner is the one who overvalues the object. The only time it doesn’t apply is when the winner attaches a personal value to the object, such as a collector buying a painting.
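A small R simulation (with purely illustrative numbers) makes the curse visible: every bidder’s estimate is unbiased, yet the highest bid systematically overshoots the true value:

```r
# Each bidder makes an unbiased estimate of the item's true value,
# and the highest bid wins the auction
set.seed(1)
true_value <- 100    # illustrative true value of the meat
n_bidders  <- 50
n_auctions <- 1000

winning_bids <- replicate(n_auctions,
                          max(rnorm(n_bidders, mean = true_value, sd = 10)))

mean(winning_bids)   # well above 100: the winner overvalues the object
```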

References

Galton, F., Vox Populi, Nature, 1907, 75, 450


Posterior as a Compromise

It is interesting to see how the posterior distribution is a compromise between prior knowledge and the likelihood. An extreme, amusing case is a coin believed to be biased one way or the other – a bimodal prior with peaks at 0.25 and 0.75 – whereas the collected data gave almost equal heads and tails.
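A grid sketch in R shows the compromise; the bimodal prior below is an illustrative mixture of two normal peaks at 0.25 and 0.75:

```r
theta <- seq(0, 1, length = 1001)            # grid of candidate biases

# illustrative bimodal prior: equal mixture of peaks at 0.25 and 0.75
prior <- dnorm(theta, 0.25, 0.05) + dnorm(theta, 0.75, 0.05)
prior <- prior / sum(prior)

# data with almost equal heads and tails, e.g. 7 heads in 14 flips
likelihood <- theta^7 * (1 - theta)^7

posterior <- likelihood * prior / sum(likelihood * prior)

theta[which.max(posterior)]   # posterior mode pulled from 0.25 towards 0.5
```

The posterior mode sits between the prior peak and the likelihood peak at 0.5 – neither the prior nor the data wins outright.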


Twin Paradox

First things first: the twin paradox is not a paradox! Now, what is it?

Before we get to the twin paradox, we must know the concept of the relativity of simultaneity. It is a central concept in the special theory of relativity and arises because the speed of light is constant. A famous thought experiment: a light flashes at the centre point of a train running at constant velocity. To an observer inside the train, the light reaches the engine and the tail simultaneously (they are the same distance from the light source). But for an observer standing on the platform, the light hits the back of the train first, as the back is catching up to it, and strikes the engine last, as the engine is moving away from the light source. And both are right. In other words, the simultaneity of distant events depends on the frame of reference.

Put differently, let A and B be two objects, with A moving towards a static B at constant velocity. From A’s vantage point, A feels stationary and B is moving towards it. Both A and B are correct.

On to the twin paradox: Anne and Becky are twins. Becky goes away in a spaceship to a distant planet and comes back. From stay-at-home Anne’s perspective, Becky’s clock runs slow due to the special theory of relativity, so when she comes back, Becky will be younger than Anne. But Becky, while heading back, looks at Anne and says it was Anne who was moving towards her (in her perspective), so Anne should be younger. How can both be true? Hence, it looks like a paradox.

Interestingly, this time, we can’t say both are right. Anne is right; Becky is the younger of the two when she returns. One can claim to be at rest while the rest of the world moves only if one moves with constant velocity. Sadly, Becky cannot claim that: she changed her direction to return, which means she accelerated. Remember: velocity comprises speed as well as direction. Anne’s version is valid because she had no acceleration; she stayed at a constant (zero) velocity.
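The ‘running slow’ can be quantified with the standard time-dilation relation (a textbook formula, not derived in this post), applied leg by leg of the journey. If Becky travels at speed v relative to Anne, the proper time t’ registered by Becky’s clock relates to Anne’s elapsed time t as

t' = t \sqrt{1 - v^2/c^2}

For example, at v = 0.8c, the factor is \sqrt{1 - 0.64} = 0.6, so Becky ages only six years for every ten of Anne’s.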

WSU: Space, Time, and Einstein with Brian Greene


Impact of Prior – Continued

Last time, we saw how the choice of prior impacts Bayesian inference (the updating of knowledge using new data). In that illustration, a well-defined (narrow) prior distribution remained more or less the same after ten new, mostly contradictory data points.

Now, consider the same situation, but with 100 collected data points, 80% of them tails (the same proportion as before).

Now the inference leans towards the new, compelling evidence. While Bayesian analysis never prohibits broad, non-specific priors, the value of well-founded prior knowledge is indisputable, as these examples illustrate.

If multiple sets of priors are available, it is prudent to check their impact on the posterior and map the sensitivities. Sets of priors can also be joined (pooled) together for inference.
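Reusing the grid code from the earlier post, the 100-flip update only changes the data vector (80 tails coded as 0, 20 heads as 1):

```r
theta  <- seq(0, 1, length = 1001)      # grid of candidate biases
ptheta <- dnorm(theta, 0.5, 0.1)        # the same narrow prior as before
ptheta <- ptheta / sum(ptheta)          # normalise the prior

D_data <- c(rep(0, 80), rep(1, 20))     # 80 tails (0), 20 heads (1)
zz <- sum(D_data)                       # number of heads
nn <- length(D_data)                    # number of flips

pData_theta <- theta^zz * (1 - theta)^(nn - zz)   # Bernoulli likelihood
ptheta_Data <- pData_theta * ptheta / sum(pData_theta * ptheta)

theta[which.max(ptheta_Data)]   # posterior mode pulled from 0.5 towards 0.2
```

With ten times as much data, the likelihood overwhelms the narrow prior, and the posterior mode ends up much closer to the observed head proportion of 0.2.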


Bias in a Coin – Impact of Prior

We have seen in an earlier post how the Bayes equation is applied to parameter values and data, using the example of coin tosses. The whole process is known as Bayesian inference. There are three steps: choose a prior probability distribution for the parameter, build the likelihood model from the collected data, then multiply the two and divide by the probability of obtaining the data. We have seen several examples of applying the equation to discrete values, but in most real-life inference problems, it is applied to continuous mathematical functions.

The objective

The objective of the investigation is to find the bias of a coin after discovering that ten tosses have resulted in eight tails and two heads. The bias of a coin is the chance of getting a particular outcome; in our case, a head. Therefore, for a fair coin, the bias = 0.5.

The likelihood model

It is the mathematical expression for the likelihood of the data at every possible parameter value. For processes such as coin flipping, the Bernoulli distribution describes the likelihood function.

P(\gamma|\theta) = \theta^\gamma (1-\theta)^{(1-\gamma)}

Gamma in the equation represents an outcome (1 for a head, 0 for a tail). If the coin is tossed i times, yielding a sequence of heads and tails, the function becomes,

\\ P(\{\gamma_i\}|\theta) = \Pi_i P(\gamma_i|\theta) =  \Pi_i \theta^{\gamma_i} (1-\theta)^{(1-\gamma_i)}  \\ \\ = \theta^{\Sigma_i\gamma_i} (1-\theta)^{\Sigma_i(1-\gamma_i)} \\ \\ = \theta^{\#heads} (1-\theta)^{\#tails}

The calculations

1. Uniform prior: The prior probability in the Bayes equation is also known as the belief. In the first case, we have no certainty regarding the bias, so we assume all values of theta (the parameter) are equally possible as the prior belief.

theta <- seq(0, 1, length = 1001)        # grid of candidate bias values
ptheta <- c(0, rep(1, 999), 0)           # uniform prior (zero at the endpoints)
ptheta <- ptheta / sum(ptheta)           # normalise the prior

D_data <- c(rep(0, 8), rep(1, 2))        # 8 tails (0) and 2 heads (1)
zz <- sum(D_data)                        # number of heads
nn <- length(D_data)                     # number of flips

pData_theta <- theta^zz * (1 - theta)^(nn - zz)   # Bernoulli likelihood

pData <- sum(pData_theta * ptheta)       # evidence: probability of the data

ptheta_Data <- pData_theta * ptheta / pData   # posterior via Bayes' rule

2. Narrow prior: Suppose we have high certainty about the bias; we think it lies between 0.4 and 0.6. And we repeat the process,

theta <- seq(0, 1, length = 1001)        # grid of candidate bias values
ptheta <- dnorm(theta, 0.5, 0.1)         # narrow normal prior centred at 0.5
ptheta <- ptheta / sum(ptheta)           # normalise the prior

D_data <- c(rep(0, 8), rep(1, 2))        # 8 tails (0) and 2 heads (1)
zz <- sum(D_data)                        # number of heads
nn <- length(D_data)                     # number of flips

pData_theta <- theta^zz * (1 - theta)^(nn - zz)   # Bernoulli likelihood

pData <- sum(pData_theta * ptheta)       # evidence: probability of the data

ptheta_Data <- pData_theta * ptheta / pData   # posterior via Bayes' rule

In summary

The first figure demonstrates that when we have weak prior knowledge, reflected in a broad spread of credibility over the parameter values, the posterior, or the updated belief, moves towards the gathered evidence (eight tails and two heads) within a few experiments (10 flips).

On the other hand, the prior in the second figure reflects certainty, perhaps from previous knowledge. In such cases, contradictory data from a few flips is not enough to move the posterior towards the evidence.

But what happens if we collect a lot of data? We’ll see next.
