Decision Making

Parametric vs Non-Parametric Tests

In many statistical inference tests, you may have noticed an inherent assumption that the sample comes from a particular distribution (e.g. the normal distribution). Such tests are parametric tests. A non-parametric test does not assume any distribution for its sample.

The parametric tests for means include t-tests (1-sample, 2-sample, paired), ANOVA, etc. On the other hand, the sign test is an example of a non-parametric test. A sign test can test a population median against a hypothesised value.

A few advantages of non-parametric tests include:

  1. Assumptions about the population are not necessary.
  2. It is more intuitive and does not require much statistical knowledge.
  3. It can analyse ordinal data, ranked data, and data with outliers.
  4. It can be used even for small samples.
  5. It is ideal when the median is the better measure of central tendency.

Following are the typical parametric tests and their non-parametric analogues; a quick R illustration follows the table.

              Parametric test    Non-parametric test
One sample    One-sample t       Sign test; Wilcoxon’s signed rank
Two sample    Paired t           Sign test; Wilcoxon’s signed rank
              Unpaired t         Mann-Whitney test; Kolmogorov-Smirnov test
K-sample      ANOVA              Kruskal-Wallis test; Jonckheere test
              2-way ANOVA        Friedman test
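
As a quick illustration (the sample values here are made up), base R covers both flavours for the one-sample case:

x <- c(12, 15, 9, 14, 11, 8, 13, 16)   # made-up sample
t.test(x, mu = 10)                     # parametric: one-sample t-test
wilcox.test(x, mu = 10)                # non-parametric: Wilcoxon signed rank
binom.test(sum(x > 10), length(x))     # non-parametric: sign test on the median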


Naive Bayes

Naive Bayes is a technique for building classifiers to distinguish one group from another. A simple example is to identify spam emails. It uses Bayes’ theorem to perform the job, hence the name.

What is the probability that the email I received is spam, given it has the words ‘money’ and ‘buy’?

Let’s build a spam detector from previous data. I have 100 emails, of which 75 are normal, and 25 are spam. 8 of the 75 normal emails contain the word ‘buy’, whereas 15 spam emails have the word. On the other hand, ‘money’ is present in 5 normal emails and 20 spam emails.

The probability that the email is normal, given it contains the words ‘buy’ and ‘money’, is proportional to the probability of seeing ‘buy’ and ‘money’ in a normal message times the probability of having a normal message. As you may have noticed, this is just Bayes’ theorem without the normalising denominator.

P(N|B&M) ∝ P(B&M|N) x P(N)

Treating the two words as independent (the ‘naive’ assumption), P(B&M|N) is (8/75) x (5/75), and P(N) is 75/100.

Extending the same logic, the probability that the email is spam, given B&M, is:

P(S|B&M) ∝ P(B&M|S) x P(S)

P(B&M|S) is (15/25) x (20/25), and P(S) is 25/100.

P(B&M|N) x P(N) = 0.0053; P(B&M|S) x P(S) = 0.12

The email is more likely spam; the answer to the original question is obtained by applying Bayes’ theorem.

P(S|B&M) = P(B&M|S) x P(S) / [P(B&M|S) x P(S) + P(B&M|N) x P(N)] = 0.12/(0.12 + 0.0053) ≈ 96%
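
The whole calculation fits in a few lines of R; here is a minimal sketch reproducing the numbers above:

# Counts from the post: 100 emails, 75 normal, 25 spam
p_n    <- 75/100
p_s    <- 25/100
p_bm_n <- (8/75) * (5/75)     # P(buy & money | normal), naive independence
p_bm_s <- (15/25) * (20/25)   # P(buy & money | spam)

# Bayes' theorem: P(spam | buy & money), ~0.96
p_bm_s * p_s / (p_bm_s * p_s + p_bm_n * p_n)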


Confusion Matrix – Iris Dataset

The Iris dataset includes three species with 50 samples each and a few properties of each flower.
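
The post does not show the code behind the output below, but a matrix like it can be produced with ‘caret’. Here is a minimal sketch, assuming a k-nearest-neighbour model and a 20% test split (ten flowers per species); both are assumptions, not the post’s actual recipe:

library(caret)

set.seed(1)   # assumption: no seed is given in the post
test_index <- createDataPartition(iris$Species, p = 0.2, list = FALSE)
train_set  <- iris[-test_index, ]
test_set   <- iris[test_index, ]   # 10 flowers per species

fit <- train(Species ~ ., data = train_set, method = "knn")   # assumed model
confusionMatrix(predict(fit, test_set), test_set$Species)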

Confusion Matrix and Statistics

            Reference
Prediction   setosa versicolor virginica
  setosa         10          0         0
  versicolor      0          9         0
  virginica       0          1        10

Overall Statistics
                                          
               Accuracy : 0.9667          
                 95% CI : (0.8278, 0.9992)
    No Information Rate : 0.3333          
    P-Value [Acc > NIR] : 2.963e-13       
                                          
                  Kappa : 0.95            
                                          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: setosa Class: versicolor Class: virginica
Sensitivity                 1.0000            0.9000           1.0000
Specificity                 1.0000            1.0000           0.9500
Pos Pred Value              1.0000            1.0000           0.9091
Neg Pred Value              1.0000            0.9524           1.0000
Prevalence                  0.3333            0.3333           0.3333
Detection Rate              0.3333            0.3000           0.3333
Detection Prevalence        0.3333            0.3000           0.3667
Balanced Accuracy           1.0000            0.9500           0.9750


Confusion Matrix – The Prevalence Problem

So, do you think the machine learning algorithm developed in the previous post is useful for predicting the sex of a person from their height? In other words, what is the precision of the method?

Precision means the probability that the person is a female, given the prediction was female.

P(Y = 1 | \hat{Y} = 1)

Based on Bayes’ theorem

P(Y = 1 | \hat{Y} = 1) = P( \hat{Y} = 1 | Y = 1) \frac{P(Y = 1)}{P(\hat{Y} = 1)}
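
Plugging in the in-dataset estimates from that confusion matrix (sensitivity 0.4622, prevalence 0.2262, detection prevalence 0.1502) recovers the positive predictive value it reports:

P(Y = 1 | \hat{Y} = 1) = 0.4622 \times \frac{0.2262}{0.1502} \approx 0.70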

Note that P(Y = 1) is the prior probability of females in the system (the population where the algorithm will be used) and not in the dataset. It is likely to be close to 0.5, whereas we know the prevalence of females in the dataset is only 0.23. The ratio of the dataset prevalence to the actual prevalence is 0.23/0.5 = 0.46; the precision is less than 1 in 2.


Confusion Matrix – Accuracy vs Sensitivity

Confusion Matrix and Statistics

          Reference
Prediction Female Male
    Female     55   24
    Male       64  383
                                         
               Accuracy : 0.8327         
                 95% CI : (0.798, 0.8636)
    No Information Rate : 0.7738         
    P-Value [Acc > NIR] : 0.0005217      
                                         
                  Kappa : 0.4576         
                                         
 Mcnemar's Test P-Value : 3.219e-05      
                                         
            Sensitivity : 0.4622         
            Specificity : 0.9410         
         Pos Pred Value : 0.6962         
         Neg Pred Value : 0.8568         
             Prevalence : 0.2262         
         Detection Rate : 0.1046         
   Detection Prevalence : 0.1502         
      Balanced Accuracy : 0.7016         
                                         
       'Positive' Class : Female         
                                    

We see that the prediction had a high overall accuracy yet a low sensitivity. This happens because of the low prevalence of females (23%): failing to call actual females females (low sensitivity) does not lower the accuracy as much as incorrectly calling the far more numerous males females would.

Here is the R code behind the plot of the two height distributions (histograms with frequency polygons).

library(tidyverse)   # for ggplot2 and %>%
library(dslabs)      # assumption: heights is the dataset shipped with dslabs

heights %>%
  ggplot(aes(x = height)) +
  # overlapping histograms, one per sex
  geom_histogram(aes(color = sex, fill = sex), alpha = 0.4, position = "identity") +
  geom_freqpoly(aes(color = sex, linetype = sex), bins = 30, size = 1.5) +
  scale_fill_manual(values = c("#00AFBB", "#FC4E07")) +
  scale_color_manual(values = c("#00AFBB", "#FC4E07")) +
  coord_cartesian(xlim = c(50, 80)) +
  scale_x_continuous(breaks = seq(50, 80, 10), name = "Height [in]") +
  # dark theme used for the blog's styling
  theme(text = element_text(color = "white"),
        panel.background = element_rect(fill = "black"),
        plot.background = element_rect(fill = "black"),
        panel.grid = element_blank(),
        axis.text = element_text(color = "white"),
        axis.ticks = element_line(color = "white"))

Looking at the plot, we see that the cut-off we used, 64 inches, misses a significant proportion of females. Let’s re-run the simulation after raising the cut-off by two inches, to 66 inches.
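
Only the cut-off in the prediction rule changes (the rest of the code is as in the Confusion Matrix post below):

y_hat <- ifelse(test_set$height > 66, "Male", "Female") %>%   # 64 becomes 66
  factor(levels = levels(test_set$sex))
confusionMatrix(data = y_hat, reference = test_set$sex)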

Confusion Matrix and Statistics

          Reference
Prediction Female Male
    Female     82   66
    Male       33  345
                                          
               Accuracy : 0.8118          
                 95% CI : (0.7757, 0.8443)
    No Information Rate : 0.7814          
    P-Value [Acc > NIR] : 0.049151        
                                          
                  Kappa : 0.5007          
                                          
 Mcnemar's Test P-Value : 0.001299        
                                          
            Sensitivity : 0.7130          
            Specificity : 0.8394          
         Pos Pred Value : 0.5541          
         Neg Pred Value : 0.9127          
             Prevalence : 0.2186          
         Detection Rate : 0.1559          
   Detection Prevalence : 0.2814          
      Balanced Accuracy : 0.7762          
                                          
       'Positive' Class : Female  


Confusion Matrix – Continued

We have seen output from the ‘confusionMatrix’ command in the ‘caret’ package.

Confusion Matrix and Statistics

          Reference
Prediction Female Male
    Female     55   24
    Male       64  383
                                         
               Accuracy : 0.8327         
                 95% CI : (0.798, 0.8636)
    No Information Rate : 0.7738         
    P-Value [Acc > NIR] : 0.0005217      
                                         
                  Kappa : 0.4576         
                                         
 Mcnemar's Test P-Value : 3.219e-05      
                                         
            Sensitivity : 0.4622         
            Specificity : 0.9410         
         Pos Pred Value : 0.6962         
         Neg Pred Value : 0.8568         
             Prevalence : 0.2262         
         Detection Rate : 0.1046         
   Detection Prevalence : 0.1502         
      Balanced Accuracy : 0.7016         
                                         
       'Positive' Class : Female         
                                    
                     Female (Actual)   Male (Actual)
Female (Predicted)        55 (TP)          24 (FP)
Male (Predicted)          64 (FN)         383 (TN)
TP – true positive, FP – false positive, FN – false negative, TN – true negative

Accuracy is the proportion of cases where the model correctly predicted the outcome.
(TP + TN) / Total
(55+383)/(55+64+24+383) = 0.8327

Sensitivity is the proportion of females the model correctly predicted.
TP/(TP + FN)
55/(55+64) = 0.4622

Specificity is the proportion of males the model correctly predicted.
TN/(TN + FP)
383/(24+383) = 0.9410

Positive predictive value (PPV) is the proportion of predicted females who are actually females.
TP/(TP + FP)
55/(55+24) = 0.6962

Negative predictive value (NPV) is the proportion of predicted males who are actually males.
TN/(TN + FN)
383/(383 + 64) = 0.8568

Prevalence is the proportion of females in the total sample set.
(TP + FN) / (TP + FN + FP + TN)
(55+64)/(55+64+24+383) = 0.2262

Detection rate is the proportion of correctly predicted females (true positives) in the total.
TP/(TP + FN + FP + TN)
55/(55+64+24+383) = 0.1046

Detection Prevalence is the proportion of predicted females in total.
(TP+FP)/(TP + FN + FP + TN)
(55+24)/(55+64+24+383) = 0.1502

Balanced accuracy is the average of sensitivity and specificity, (sensitivity + specificity)/2.
(0.4622 + 0.9410)/2 = 0.7016
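
All of these metrics follow from the four counts; here is a minimal R sketch recomputing them:

# The four cells of the confusion matrix above
TP <- 55; FP <- 24; FN <- 64; TN <- 383
total <- TP + FP + FN + TN

(TP + TN) / total   # accuracy:             0.8327
TP / (TP + FN)      # sensitivity:          0.4622
TN / (TN + FP)      # specificity:          0.9410
TP / (TP + FP)      # pos pred value:       0.6962
TN / (TN + FN)      # neg pred value:       0.8568
(TP + FN) / total   # prevalence:           0.2262
TP / total          # detection rate:       0.1046
(TP + FP) / total   # detection prevalence: 0.1502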


Confusion Matrix

Machine learning is a technique to train a model (algorithm) using a dataset for which we know the outcome, and then use the model to make predictions where we don’t have the outcome. The confusion matrix is a tabular summary highlighting the model’s performance.

Height data

We develop a simple machine learning algorithm predicting the sex of a person from height data. The R package to help here is ‘caret’. The dataset contains 1050 members.
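
To follow along, the data can be loaded and its first ten entries inspected as below (a sketch; that the dataset is the ‘heights’ set shipped with the ‘dslabs’ package is an assumption):

library(caret)
library(dslabs)    # assumption: the post's data matches dslabs::heights
data(heights)
head(heights, 10)  # the first ten entries: sex and height in inches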

The first thing is to check whether we can distinguish between the heights of males and females. The following command summarises the mean and standard deviation of heights.

heights %>% group_by(sex) %>% summarize(Mean = mean(height), SD = sd(height))

Yes, the males are a little taller than the females, and we use this property to make decisions, i.e., assign the output as male if the height is greater than 64 inches and female otherwise. But before getting into the calculations, we divide the dataset randomly into two halves – a training set and a test set – using ‘createDataPartition’ from the ‘caret’ package.

test_index <- createDataPartition(heights$height, times = 1, p = 0.5, list = FALSE)
train_set <- heights[-test_index,]
test_set <- heights[test_index,]

Now we have two sets of roughly equal size (the test set here ends up with 526 members). We apply the algorithm to the training set and see the results.

y_hat <- ifelse(train_set$height > 64, "Male", "Female") %>%  factor(levels = levels(train_set$sex))
mean(y_hat == train_set$sex)

We apply the formulation to the test set and get the confusion matrix.

y_hat <- ifelse(test_set$height > 64, "Male", "Female") %>% 
  factor(levels = levels(test_set$sex))

confusionMatrix(data = y_hat, reference = test_set$sex)
Confusion Matrix and Statistics

          Reference
Prediction Female Male
    Female     55   24
    Male       64  383
                                         
               Accuracy : 0.8327         
                 95% CI : (0.798, 0.8636)
    No Information Rate : 0.7738         
    P-Value [Acc > NIR] : 0.0005217      
                                         
                  Kappa : 0.4576         
                                         
 Mcnemar's Test P-Value : 3.219e-05      
                                         
            Sensitivity : 0.4622         
            Specificity : 0.9410         
         Pos Pred Value : 0.6962         
         Neg Pred Value : 0.8568         
             Prevalence : 0.2262         
         Detection Rate : 0.1046         
   Detection Prevalence : 0.1502         
      Balanced Accuracy : 0.7016         
                                         
       'Positive' Class : Female         
                                    

In the tabular form,

                     Female (Actual)   Male (Actual)
Female (Predicted)        55                24
Male (Predicted)          64               383

The rows of the confusion matrix present what the algorithm predicted, and the columns correspond to the known truth. The output provides a bunch of other metrics; those are next.


BreakUp Drinking

Here is data on alcohol consumption before and after a breakup. The assumption is that drinking increases post-breakup. Is that true?

Before   After
 470      408
 354      439
 496      321
 351      437
 349      335
 449      344
 378      318
 359      492
 469      531
 329      417
 389      358
 497      391
 493      398
 268      394
 445      508
 287      399
 338      345
 271      341
 412      326
 335      467

The null hypothesis, H0: (consumption after – before) = 0.
The alternative hypothesis, HA: (consumption after – before) > 0.

T-Test

\textrm{T-Statistic } = \frac{D - \mu_d}{S_d/\sqrt{n}}

D = mean difference of the parameter (after vs before)
μ_d = hypothesised mean difference
S_d = standard deviation of the differences
n = number of sample pairs

We insert the data in the following command and run the function, t.test.

Before <- c(470, 354, 496, 351, 349, 449, 378, 359, 469, 329, 389, 497, 493, 268, 445, 287, 338, 271, 412, 335)

After  <- c(408, 439, 321, 437, 335, 344, 318, 492, 531, 417, 358, 391, 398, 394, 508, 399, 345, 341, 326, 467)

t.test(Before, After, paired = TRUE, alternative = "greater")
	Paired t-test

data:  Before and After
t = -0.53754, df = 19, p-value = 0.7014
alternative hypothesis: true mean difference is greater than 0
95 percent confidence interval:
 -48.49262       Inf
sample estimates:
mean difference 
          -11.5 
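
The statistic can also be computed by hand from the formula above; note that t.test(Before, After, ...) works on the differences Before – After:

d <- Before - After
t_stat <- mean(d) / (sd(d) / sqrt(length(d)))        # -0.53754
pt(t_stat, df = length(d) - 1, lower.tail = FALSE)   # one-sided p: 0.7014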

There was a mean difference of -11.5, and the p-value (0.7014) is far higher than the significance level we chose (0.05). Note also that, as written, t.test(Before, After, paired = TRUE, alternative = "greater") tests whether consumption was higher before; testing the stated alternative (after > before) means swapping the arguments, which gives p = 1 – 0.7014 = 0.2986 and still exceeds 0.05. Either way, the test shows no evidence supporting the hypothesis that drinking increases post-breakup.


The Nocebo Effect

A placebo is used in clinical trials as a control in drug studies to test the effectiveness of treatments. In drug trials, one group of participants receives the medicines (to be tested), and the other gets the placebo (say, sugar pills).

The concept of placebo stems from the assumption that a treatment has two components: one related to the specific effects of the treatment and the other, nonspecific, related to its perception. When the nonspecific effects benefit the participant, we speak of a placebo effect; when they are harmful, a nocebo effect.

Hypothesis Testing

The null hypothesis (H0) typically represents the default state, the state of “no effect”. For example, you compare the means of two groups, such as people who took a particular drug and people who received the placebo. As a drug researcher, your objective is to find the effectiveness of the medicine, and that lays the foundation for your alternative hypothesis (HA or H1): that the drug has a non-zero effect. The default state (H0) assumes the drug has no impact; to be specific, H0 assumes the difference between the two means equals zero.


Power of a Statistical Test

The power of a statistical test is the probability of rejecting the null hypothesis when the null hypothesis is false (or the effect is present). It is the right decision, and before we go deeper into it, let’s recap the two types of errors in hypothesis testing.

  1. A type I error is when the null hypothesis is true, but you reject it.
  2. A type II error is when you fail to reject a false null hypothesis, i.e., you miss an effect that is present.

In a tabular format,

                 H0 is TRUE                   H0 is FALSE
Fail to reject   Correct decision             Type II error
                 (probability = 1 – α)        (probability = β)
Reject           Type I error                 Correct decision
                 (probability = α)            (probability = 1 – β)

Your guess is right: power equals 1 – β.
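
Power calculations are typically run before an experiment to pick a sample size. Here is a quick illustration with base R’s power.t.test (the numbers are made up):

# Power of a two-sample t-test: 20 subjects per group,
# assumed true difference 0.5 and standard deviation 1
power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)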
