
Markov Chains – Absorbing chains

Let’s revisit the three stores, but with a slight difference this time.

There is no arrow from store A to B or C. The customers who reach A are fully retained or absorbed. This is an absorbing Markov chain. The transition matrix for the above situation is,

\begin{bmatrix} 1 & 0.2 & 0.1\\ 0 & 0.7 & 0.3\\ 0 & 0.1 & 0.6 \end{bmatrix}

You may have guessed it already: if you let this chain evolve, the end-state distribution satisfies,

\begin{bmatrix} 1 & 0.2 & 0.1\\ 0 & 0.7 & 0.3\\ 0 & 0.1 & 0.6 \end{bmatrix} * \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} A \\ B \\ C \end{bmatrix}

A + 0.2B + 0.1C = A
0.7B + 0.3C = B
0.1B + 0.6C = C
A + B + C = 1

Adding the normalisation condition to the rearranged first equation, and rearranging the other two, gives

A + 1.2B + 1.1C = 1
-0.3B + 0.3C = 0
0.1B - 0.4C = 0

The second of these gives B = C, and the third then forces B = C = 0, leaving A = 1:

\begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}

Eventually, all customers end up in store A.
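
A quick numerical check (a sketch of my own, not from the original post): iterate the chain in R from an arbitrary starting distribution and watch all the mass drain into store A.

pM <- matrix(c(1, 0, 0, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)   # columns: from A, from B, from C
x <- c(0.4, 0.24, 0.36)   # an arbitrary initial distribution
for (i in 1:50) x <- pM %*% x
x   # essentially c(1, 0, 0): everyone ends up in A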


Markov Chains – Visiting Friends

Amy lives in city A and has friends in cities B, C, D, and E. The following scheme represents the direct bus connections between these cities. Every Monday, she takes a direct bus at random, reaches a city, and stays with her friend for a week. If she keeps this up for a year, how many weeks will she have spent in each city?

The route map suggests that in a given week, Amy moves with the following probabilities:
1) Equal chances from A to B or D (0.5, 0.5)
2) Equal chances from B to A, C, or D (0.33, 0.33, 0.33)
3) Equal chances from C to B or D (0.5, 0.5)
4) Equal chances from D to A, B, C, or E (0.25, 0.25, 0.25, 0.25)
5) Can only move from E to D (1.0)

In matrix form (each column holds the transition probabilities out of one city):

0.0 0.33  0.0  0.25   0.0
0.5 0.00  0.5  0.25   0.0
0.0 0.33  0.0  0.25   0.0
0.5 0.33  0.5  0.00   1.0
0.0 0.0   0.0  0.25   0.0

The required task is to find the stable end distribution over the cities, which can be done using the relationship:

P X* = X*

The expansion of the above equation leads to the following set of linear equations, which we will solve numerically in R.

0A + 0.33B + 0C + 0.25D + 0E = A
0.5A + 0B + 0.5C + 0.25D + 0E = B
0A + 0.33B + 0C + 0.25D + 0E = C
0.5A + 0.33B + 0.5C + 0D + 1E = D
0A + 0B + 0C + 0.25D + 0E = E
A + B + C + D + E = 1

Use the first equation to substitute for A in the normalisation condition, and then solve the resulting set of five linear equations using the following R program.

# Coefficient matrix. Note that matrix() fills column-wise, so each line
# below is a COLUMN of coefficients: all the A's, then all the B's, and so on.
A <- matrix(c( 0.0,   0.5,  0.0,   0.50,  0.0, 
               1.333, -1.0, 0.333, 0.333, 0.0, 
               1.0,   0.5,  -1.0,  0.5,   0.0,  
               1.25,  0.25, 0.25,  -1.0,  0.25, 
               1.0,   0.0,  0.0,   1.0,  -1.0), nrow = 5)
B <- c(1, 0, 0, 0, 0)   # right-hand side: only the substituted normalisation equals 1

solve(A, B)
[1] 0.16686465 0.25007815 0.16661457 0.33335417 0.08333854
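
To answer the question as asked, scale the stationary proportions by the 52 weeks in a year (an added step, not in the original program):

round(52 * solve(A, B))
[1]  9 13  9 17  4

So Amy spends roughly 9, 13, 9, 17, and 4 weeks in cities A, B, C, D, and E, respectively.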


Markov Chains – The Returning Visitor

Let’s take another example. For a particular website, 40% of visitors don’t return next time. Of the non-visitors, 10% turn up next time. What is the website’s end-state distribution? Let V represent the visitor and NV the non-visitor.

We know the property of the end state: it is the distribution X* at which P X* = X*. Let’s fill in the available data, remembering that the columns of the transition matrix must add up to 100% or 1:

\begin{bmatrix} 0.6 & 0.1\\ 0.4 & 0.9 \end{bmatrix} * \begin{bmatrix} V \\ NV \end{bmatrix} = \begin{bmatrix} V \\ NV \end{bmatrix}

Expanding the equation, with the percentages written as fractions:

0.6 V + 0.1 NV = V
0.4 V + 0.9 NV = NV

From the first equation, 0.1 NV = 0.4 V, so NV = 4 V.
But we know V + NV must be 1:
V + 4 V = 1
V = 1/5
NV = 4/5

20% will visit, and 80% won’t.
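
A quick numerical check (a sketch of my own; the 50/50 starting split is an arbitrary choice):

pM <- matrix(c(0.6, 0.4, 0.1, 0.9), nrow = 2)   # columns: from V, from NV
x <- c(0.5, 0.5)
for (i in 1:100) x <- pM %*% x
x   # approaches c(0.2, 0.8)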


Markov Chains – The Stable End Distribution

We have seen that the initial distribution of counts (or proportions) reaches a stable state after a few stages.

In other words, the (n+1)th and the nth stages give the same result for the same initial distribution and probability matrix.

Let’s work it out for n = 10: X10 = P^10 X0

pM <- matrix(c(0.8, 0.1, 0.1, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)
stage0 <- c(0.4, 0.24, 0.36)
stage10 <- pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%stage0 
stage10
       [,1]
[1,] 0.45
[2,] 0.35
[3,] 0.20

And then for n = 11: X11 = P^11 X0

pM <- matrix(c(0.8, 0.1, 0.1, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)
stage0 <- c(0.4, 0.24, 0.36)
stage11 <- pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%pM%*%stage0
stage11
     [,1]
[1,] 0.45
[2,] 0.35
[3,] 0.20
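
Chaining %*% ten times by hand is tedious. Base R has no built-in matrix-power operator, so here is a small sketch of the same computation using Reduce (my rewrite, not from the original post):

Pn <- Reduce(`%*%`, replicate(10, pM, simplify = FALSE))   # P^10
Pn %*% stage0   # same result: 0.45, 0.35, 0.20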

It would be interesting to see how the probability matrix is transformed as the number of stages progresses. Here is the original matrix (P), followed by P^10.

P:
0.8  0.2  0.1
0.1  0.7  0.3
0.1  0.1  0.6

P^10:
0.45 0.45 0.45
0.35 0.35 0.35
0.20 0.20 0.20

Elements in each row of P^10 are identical, which explains why multiplying it by any initial proportion vector gives the same end result.

Another interesting fact is how Xn depends on the initial proportion, X0. We use the following X0 values, where stagen denotes the matrix P^10 computed above. Note that the proportions in each X0 must add up to 1.

stagen %*% c(0.4, 0.24, 0.36)
stagen %*% c(0.3, 0.4, 0.3)
stagen %*% c(0.1, 0.1, 0.8)
stagen %*% c(0.1, 0.8, 0.1)

There is no prize for guessing – they all lead to the same Xn value!

0.45
0.35
0.20
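
The stable distribution can also be pulled out directly, without iterating: it is the eigenvector of P for eigenvalue 1, rescaled to sum to 1. A minimal sketch with base R's eigen() (taking the first column assumes eigenvalue 1 is listed first, which holds here since it is the largest):

v <- eigen(pM)$vectors[, 1]   # eigenvector for eigenvalue 1
v / sum(v)                    # 0.45 0.35 0.20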


Markov Chains – The Chain Reaction

Last time, we saw how the next stage is developed from the current stage using the probability matrix.

X1 = P X0

There is a reason why it is called a Markov chain: the system we have developed can now predict the stage after the next one, i.e., X2, by multiplying the probability matrix with X1.

X2 = P X1

Substituting for X1,
X2 = P P X0
X2 = P^2 X0
or, in general,
Xn = P^n X0

Let’s work out the state after 10 moves and plot it to see where A ends up.
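
A minimal sketch in R (the shop matrix and starting proportions are taken from the related posts; the loop and plotting choices are mine):

pM <- matrix(c(0.8, 0.1, 0.1, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)
x <- c(0.4, 0.24, 0.36)       # X0
traceA <- numeric(11)
traceA[1] <- x[1]
for (n in 1:10) {
  x <- pM %*% x               # Xn = P X(n-1)
  traceA[n + 1] <- x[1]
}
plot(0:10, traceA, type = "b", xlab = "stage n", ylab = "proportion in shop A")

The curve levels off near 0.45, the stable share of shop A.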


Markov Chains – Now to Next

In the last post, we started an example to demonstrate the objective of Markov processes.

The question is, what is the expected number of customers in shops A, B, and C in the following week?

The whole formulation is nothing but: future state = probability matrix x present state. The probability matrix, fundamental to making this translation, is developed from the diagram we created earlier.

The first row of the matrix holds the probabilities A to A, B to A, and C to A.
The second row: A to B, B to B, and C to B.
The third row: A to C, B to C, and C to C.

Perform the matrix multiplication between the two. Here is the R code.

pM <- matrix(c(0.8, 0.1, 0.1, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)
pM
    [,1] [,2] [,3]
[1,]  0.8  0.2  0.1
[2,]  0.1  0.7  0.3
[3,]  0.1  0.1  0.6

pM <- matrix(c(0.8, 0.1, 0.1, 0.2, 0.7, 0.1, 0.1, 0.3, 0.6), nrow = 3)
past <- c(0.4, 0.24, 0.36)   # 200, 120, and 180 customers out of 500
future <- pM %*% past
future
future
      [,1]
[1,] 0.404
[2,] 0.316
[3,] 0.280
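
The question asked for customer counts; multiplying the proportions by the 500 customers in town recovers them (an added step, not in the original code):

round(500 * future)
     [,1]
[1,]  202
[2,]  158
[3,]  140

So about 202, 158, and 140 customers are expected in shops A, B, and C next week.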

What are Markov Chains: An Introduction: Michel van Biezen


Markov Chains

The Markov approach is a framework for modelling stochastic processes. It models a sequence in which the probability of the next, (n+1)th, event depends only on the state of the current, nth, event. Therefore, it is also called a ‘memoryless’ process.

Before performing calculations, let’s familiarise ourselves with the concept and notation. Suppose there are two states: state A and state B. The process stays in state A 20% of the time and moves to state B the remaining 80%. On the other hand, state B has a 30% chance of staying and a 70% chance of moving to state A. The following diagram depicts the process.
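
Writing this out explicitly (my addition, using the convention that each column lists the probabilities of leaving one state), the process corresponds to the transition matrix

\begin{bmatrix} 0.2 & 0.7\\ 0.8 & 0.3 \end{bmatrix}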

For example, imagine three shops in town—A, B, and C—that attract 200, 120, and 180 customers, respectively, this week. The following possibilities are expected for next week.

Shop A:
80% of the customers stay loyal
10% can move to shop B
10% can move to shop C

Shop B:
70% of the customers stay loyal
20% can move to shop A
10% can move to shop C

Shop C:
60% of the customers stay loyal
10% can move to shop A
30% can move to shop B

The question is, what is the expected number of customers in shops A, B, and C in the following week?


Trees of Logic

An evil mathematician catches Annie and Ben and puts them in two separate cells. Annie can see 12 trees outside her cell, and Ben can see 8. They are told that together they can see all the trees and that no tree is seen by both. Every morning, the mathematician comes to each of them and asks whether there are 18 or 20 trees in total. They can answer or pass. If either answers wrongly, they will both be in prison forever. On the other hand, if one of them answers correctly, both will be freed. The questions are asked separately, and Annie and Ben can’t communicate. Can they escape?

Day 1: Had Annie seen 19 or 20 trees, she would have answered 20. Since Annie sees only 12 trees, she passes. She also knows Ben sees either 6 or 8 trees.

The question then comes to Ben, who sees 8 trees; he knows that Annie sees 10 or 12 trees. Either total is still possible, so he passes.

Day 2: Annie knows Ben did not answer yesterday, since the question has returned on day 2. She also knows that Ben knows she has seen 10 or 12. Had Ben seen 6 trees, he would have added 12 + 6 = 18 and answered 18, because the other possibility, 10 + 6 = 16, is not among the choices. Since he passed, she concludes that Ben sees 8 trees. She answers 20.

 Impossible Escape Logic Puzzle: MindYourDecisions


Markovian Mouse

There are seven doors, and a mouse is at door 4. Two cats are waiting at doors 1 and 7. The mouse moves one door each day, either to the left or to the right. When it reaches a cat’s door, it gets eaten. What is the average number of days before the mouse gets caught?

Since the mouse sits in the middle of the door sequence, the two sides are symmetric, and we can relabel the doors by their distance from the nearest cat: the mouse starts at door 3, and door 1 is next to a cat.

In the beginning, the mouse is at door 3; let e3 be the expected time until the mouse gets caught. After one day, the mouse has a 50% chance of moving left and a 50% chance of moving right. Either way, it reaches a door labelled 2; let e2 be the expected time until caught from there. Therefore,

e3 = 1 + e2

When the mouse is on door 2, after one day it has a 50% chance of reaching door 3 (expected time e3) and a 50% chance of reaching door 1 (expected time e1).

e2 = 1 + 0.5 e3 + 0.5 e1

From door 1, the mouse either moves to door 2 or gets caught within a day.

e1 = 1 + 0.5 e2

Now, we have three linear equations to solve.
e3 = 1 + e2
e2 = 1 + 0.5 e3 + 0.5 e1
e1 = 1 + 0.5 e2

e2 = 1 + 0.5 + 0.5e2 + 0.5 e1
0.5 e2 = 1.5 + 0.5 e1
0.5 e2 = 1.5 + 0.5 + 0.25e2
0.25e2 = 2

e2 = 8
e3 = 1 + 8 = 9
e1 = 1 + 0.5 x 8 = 5

It takes an average of 9 days for the mouse to get caught.
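
As a cross-check (a sketch of my own; variables ordered e1, e2, e3, and matrix() given byrow = TRUE so that each line is one equation), the three linear equations can be handed to R's solve():

M <- matrix(c( 0.0, -1.0,  1.0,     # e3 - e2 = 1
              -0.5,  1.0, -0.5,     # e2 - 0.5 e3 - 0.5 e1 = 1
               1.0, -0.5,  0.0),    # e1 - 0.5 e2 = 1
            nrow = 3, byrow = TRUE)
b <- c(1, 1, 1)
solve(M, b)   # returns 5 8 9, i.e., e1 = 5, e2 = 8, e3 = 9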

The Cats And Random Mouse Riddle: MindYourDecisions


The Best Lottery Strategy

Amy walks into a raffle house and finds it about to close. They are raffling off an object worth $1000, and only 200 tickets have been sold so far. Knowing the winner could be drawn at any moment, how many tickets should Amy purchase to maximise her expected payoff? The cost is $1 per ticket.

The expected payoff = the expected value of the lottery – the price you paid
= the value of the object x the probability of winning – the price you paid
The probability of winning = # tickets you bought / total # tickets sold.

If x is the number of tickets Amy purchased,
The expected payoff = [1000 * x /(200 + x)] – x

So, we need to find the x that maximises the payoff. One way to do that is to plot the expected payoff, [1000 * x /(200 + x)] – x, as a function of the number of tickets purchased (x) and see where it peaks.
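
Here is a minimal sketch of that plot in R (searching 1 to 500 tickets is an arbitrary choice):

payoff <- function(x) 1000 * x / (200 + x) - x
x <- 1:500
plot(x, payoff(x), type = "l", xlab = "# tickets purchased", ylab = "expected payoff")
x[which.max(payoff(x))]   # 247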

The number of tickets that maximises the expected payoff is about 247: setting the derivative of the payoff to zero gives (200 + x)^2 = 200,000, or x ≈ 247.2.

What’s the Best Raffle Strategy?: William Spaniel
