Data & Statistics

Family-Wise Error Rate

Imagine a fair coin is tossed 10 times to test the hypothesis, H0: the coin is unbiased. The coin will likely land on heads in about 5 of those tosses, or perhaps 4 or 6. If it landed 4 times on heads and 6 on tails, we can do a simple chi-squared test to verify.
chi-squared = (4 – 5)2/5 + (6 – 5)2/5 = 0.4. 0.4 is too low; we can’t reject the null.

But what happens if all the tosses land on tails?
chi-squared = (0 – 5)2/5 + (10 – 5)2/5 = 10. We reject the null at the 99% confidence level. Yet we know the probability of a fair coin landing tails all ten times is (1/2)10 = 1/1024.

chisq.test(c(0,10),p=c(0.5,0.5))
	Chi-squared test for given probabilities

data:  c(0, 10)
X-squared = 10, df = 1, p-value = 0.001565
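The chisq.test result above can be reproduced without R. For one degree of freedom, the chi-squared upper-tail probability reduces to a normal-CDF expression (if Z is standard normal, Z2 is chi-squared with df = 1), so a few lines of standard-library Python serve as a sanity check:

```python
from math import erf, sqrt

# Chi-squared statistic for 0 heads and 10 tails against the expected 5/5.
observed, expected = [0, 10], [5, 5]
chisq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# For one degree of freedom, the upper-tail probability equals
# 2 * (1 - Phi(sqrt(x))), where Phi is the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(sqrt(chisq) / sqrt(2))))
print(chisq, round(p_value, 6))   # 10.0 and about 0.001565, matching R
```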

What about 1024 players, each tossing a fair coin 10 times?

1 - dbinom(0, 1024, 1/1024)
0.63

In other words, there is about a 63% chance that at least one player will reject the null hypothesis and conclude that the coin is biased: an incorrect rejection of a null hypothesis, or a false positive. If we test a lot (a family) of hypotheses, there is a high probability of getting at least one very small p-value by chance.
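The dbinom result follows from the standard family-wise error rate formula for m independent tests, FWER = 1 - (1 - alpha)^m, with the per-test false-positive rate alpha = 1/1024:

```python
# Family-wise error rate: the chance that at least one of n independent
# tests produces the extreme all-tails outcome purely by chance.
p = 0.5 ** 10            # P(ten tails) = 1/1024, the per-test false positive
n = 1024                 # number of players, i.e. tests in the family
fwer = 1 - (1 - p) ** n
print(round(fwer, 2))    # about 0.63, matching the dbinom result
```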

Family-Wise Error Rate Read More »

The Log-Rank Test for Survival

Here are Kaplan–Meier plots for males and females taken from the results of a cancer study. The data comes from the ‘BrainCancer’ dataset in the R library ISLR2, which contains the survival times of patients with primary brain tumours undergoing treatment.

At first glance, it appears that females were doing better, up to about 50 months, until the two lines merged. The question is: is the difference (between the two survival plots) statistically significant?

You may think of using two-sample t-tests comparing the means of survival times. But the presence of censoring makes life difficult. So, we use the log-rank test.

The idea is to test the null hypothesis, H0, that there is no difference in survival between the two groups. Let the random variable X denote the observed number of deaths in one group; under H0 it has the expected value E(X), and we build a test statistic of the following form,

W = \frac{X - E(X)}{\sqrt{Var(X)}}

X is the total number of deaths in the first group, summed over the K distinct death times.

X = \sum\limits_{k = 1}^{K} q_{1k}

R does the job for you; use the library, survival.

library(ISLR2)
library(tibble)
library(survival)
attach(BrainCancer)
as_tibble(BrainCancer)
survdiff(Surv(time, status) ~ sex)
Call:
survdiff(formula = Surv(time, status) ~ sex)

            N Observed Expected (O-E)^2/E (O-E)^2/V
sex=Female 45       15     18.5     0.676      1.44
sex=Male   43       20     16.5     0.761      1.44

 Chisq= 1.4  on 1 degrees of freedom, p= 0.2 

p = 0.2; we cannot reject the null hypothesis of no difference in survival curves between females and males.
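A back-of-the-envelope check of the survdiff table is easy to do by hand. The real log-rank statistic divides (O - E)2 by the variance V, but the simpler sum of (O - E)2/E over the groups lands close to the reported Chisq:

```python
# Verify the (O-E)^2/E column of the survdiff output above.
obs = {"Female": 15, "Male": 20}
exp = {"Female": 18.5, "Male": 16.5}

chisq = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
print(round(chisq, 1))   # about 1.4, close to the reported Chisq
```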

Reference

An Introduction to Statistical Learning: James, Witten, Hastie, Tibshirani, Taylor

The Log-Rank Test for Survival Read More »

Kaplan-Meier Estimate

The outcome variable in survival analysis is the time until an event occurs. Since studies are often time-bounded, some patients may not have experienced the event by the end of the study, and others may stop responding to the survey midway through. In either case, those patients’ survival times are censored. As censored patients also provide valuable data, the analyst faces a dilemma over whether to discard those candidates.

Let’s examine five patients in a study. The filled circles represent the completion of the event (e.g., death), and the open circles represent the censoring (either dropping out or surviving the study’s end date).

The survival function, S(t), is the probability that the true survival time (T) exceeds some fixed number t.
S(t) = P(T > t)
S(t) decreases with time (t) as the probability decreases as time passes.

In the above example, how do you estimate the probability of surviving 300 days, S(300)? Will it be 1/3 = 0.33 (only one survived of the three whose events were observed, ignoring the censored) or 3/5 = 0.6 (assuming the censored candidates also survived)? And what difference does it make to the conclusion that one of them dropped out early because she was too sick?

Kaplan and Meier came up with a smart solution to this (note that they worked on the problem separately). Their survival curve is built in the following way.
1) The first event happened at time 100. The probability of survival at t = 100 is 4/5, noting that four of the five patients were known to have survived that stage.

2) We now proceed to the next event, patient 3. Note that we skipped the censored time of patient 2.

Now, two out of three survived. The overall survival probability at t = 200 is (4/5) x (2/3).

3) Move to the last event (patient 5); the survival function drops to zero ((4/5) x (2/3) x 0). This leads to the Kaplan–Meier plot:
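The three steps above can be sketched in a few lines of Python: at each event time, the running estimate is multiplied by the fraction of those still at risk who survived that event.

```python
# Kaplan-Meier product-limit estimate for the five-patient example.
fractions = [4/5, 2/3, 0/1]   # survivors / at-risk at t = 100, 200, last event

survival = []
s = 1.0
for f in fractions:
    s *= f                    # multiply in the latest survival fraction
    survival.append(round(s, 2))

print(survival)   # [0.8, 0.53, 0.0]
```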

Kaplan-Meier Estimate Read More »

Survival analysis – Censoring

We have seen survival plots before. Survival plots represent ‘time to event’ in survival analysis. For example, in cancer diagnostics, survival analysis measures the time from exposure to the event, which is often death.

These analyses are done following a group of candidates (or patients) between two time periods, i.e., the start and end of the study. Candidates are enrolled at different times during the period, and the ‘time to event’ is noted down. Censoring is a term in survival analysis that denotes when the researcher does not know the exact time-to-event for an included observation.

Right censoring
The term is used when you know the person is still surviving at the end of the study period. If x is the time since enrollment, all we know is that the time-to-event ti > x. Imagine a study that started in 2010 and ended in 2020; a person enrolled in 2018 was still alive at the study’s culmination, so all we know is that ti > 2 years. The same category applies to patients who missed out on follow-ups.

Left censoring
This happens in observational studies where exposure to the risk occurs before the candidate enters the study. Because of this, the researcher cannot observe the time at which the event occurred. Obviously, this can’t happen if the event is death.

Interval censoring
It occurs when the time until the event of interest is not known precisely and is only known to fall between two time stamps.

Survival analysis – Censoring Read More »

Drug Development and the Valley of Death

One of the biggest beneficiaries of scientific methodology is evidence-based modern medicine. Each step in the ‘bench to bedside’ process is a testimony to the scientific rigour of medical research. While the low probability of success (PoS) at each stage is a challenge in the race against diseases, it increases the confidence level in the validity of the final product.

The drug development process is divided into two parts: basic research and clinical research. Translational research is the bridge that connects the two parts. The ‘T Spectrum’ consists of 5 stages,

T0 includes preclinical and animal studies.
T1 is the phase 1 clinical trial for safety and proof of concept
T2 is the phase 2/3 clinical trial for efficacy and safety
T3 includes the phase 4 clinical trial towards clinical outcome and
T4 leads to approval for usage by communities.

Probability of success

According to a publication by Seyhan, which quotes the NIH, 80 to 90% of research projects fail before reaching the clinical stage. The following are typical success rates for the clinical stages of drug development:
Phase 1 to Phase 2: 52%
Phase 2 to Phase 3: 28.9%
Phase 3 to Phase 4: 57.8%
Phase 4 to approval: 90.6%
The data used to arrive at the above statistics was collected from 12,728 clinical and regulatory phase transitions of 9,704 development programs across 1,779 companies in the Biomedtracker database between 2011 and 2020.

The overall chance of success from lab to shop thus becomes:
0.1×0.52×0.289×0.578×0.906 = 0.008 or < 1%!

References

Seyhan, A. A.; Translational Medicine Communications, 2019, 4-18
Mohs, R.C.; Greig, N.H.; Alzheimer’s & Dementia: Translational Research & Clinical Interventions 3, 2017, 651-657
Cummings, J.L.; Morstorf, T.; Zhong, K.; Alzheimer’s Research & Therapy, 2014, 6-37
Paul, S.M.; Mytelka, D.S.; Dunwiddie, C.T.; Persinger, C.C.; Munos, B.H.; Lindborg, S.R.; Schacht, A.L.; Nature Reviews Drug Discovery, 2010, Volume 9.
What is Translational Research?: UAMS

Drug Development and the Valley of Death Read More »

Mutually Exclusive vs Independent

Mutually exclusive events and independent events are two concepts commonly used in probability. From how they sound, these two concepts may appear similar to many people – excluding and independent. Here, we explore, from first principles, what they are and how they are related.

The definitions

Two events are mutually exclusive if, when one happens, the other cannot occur; equivalently, the probability of their intersection is zero. If A and B are the two events,
P(A∩B) = 0

Turning left and turning right are mutually exclusive.
While flipping a coin, the occurrence of the head and the occurrence of the tail are mutually exclusive events.

If two events are independent, the probability of their intersection is equal to the product of the two probabilities.
P(A∩B) = P(A)P(B)
There is another definition, which may be more intuitive to some: the occurrence of one event does not influence the probability of the other. The probability of A given B equals the probability of A.
P(A|B) = P(A)

The relationship

Consider two mutually exclusive events, which implies P(A∩B) = 0. They are independent, i.e., P(A)P(B) = P(A∩B) = 0, if and only if P(A) or P(B) (or both) equals zero. If both probabilities are > 0, then P(A)P(B) > 0, and the events are not independent.
Therefore, if A and B are mutually exclusive with positive probabilities, then they are not independent.
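A die-roll example (my own illustration, not from the reference) makes the distinction concrete; probabilities are computed by counting outcomes:

```python
from fractions import Fraction

# One roll of a fair die; probability of an event by counting outcomes.
omega = set(range(1, 7))

def prob(event):
    return Fraction(len(event & omega), len(omega))

A = {2, 4, 6}   # roll is even
B = {1, 3, 5}   # roll is odd: mutually exclusive with A
C = {1, 2}      # roll is 1 or 2

assert prob(A & B) == 0                    # mutually exclusive
assert prob(A & B) != prob(A) * prob(B)    # ...but not independent
assert prob(A & C) == prob(A) * prob(C)    # A and C are independent
```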

Reference

Are mutually exclusive events independent?: jbstatistics

Mutually Exclusive vs Independent Read More »

Placebo, Double-Blind and Experimenter Bias

We have seen the placebo effect. It occurs when someone’s physical or mental condition improves after taking a placebo or ‘fake’ treatment. Placebos are crucial in clinical trials, often serving as an effective means to screen out the noise: the placebo arm acts as the control group against which the treatment results are compared. But is providing a placebo good enough to remove the biases of a trial?

If the placebo group knows they received the ‘fake pill’, its influence is nullified. So, the first step is to hide from the participants whether they are receiving the real treatment or the placebo. This is a single-blind test.

More is needed to prevent what is known as experimenter bias, also called the observer-expectancy effect. In the next level of refinement, the information is also hidden from the experimenter. This becomes a double-blind experiment. The National Cancer Institute defines it as:
“A type of clinical trial in which neither the participants nor the researcher knows which treatment or intervention participants are receiving until the clinical trial is over.”

That means only a third party, who will help with the data analysis, will know the trial details, such as the allocation of groups or the hypothesis. Double-blind studies form the gold standard in evidence-based medical science.

Double-blind study: National Cancer Institute

Placebo, Double-Blind and Experimenter Bias Read More »

Population Distributions vs Sampling Distribution

The purpose of sampling is to determine the behaviour of the population. For the definitions of terms, sample and population, see an earlier post. In a nutshell, population is everything, and a sample is a selected subset.

Population distribution

It is a frequency distribution of a feature in the entire population, estimated, in principle, by measuring every individual in the population. Imagine a feature (height, weight, rainfall, etc.) of a population with a mean of 100 and a standard deviation of 25; the distribution may look like the following.

It means many individuals have the feature close to 100 units, fewer have it at 90 (or 110), still fewer at 80 (or 120), and a very few exceptional ones may even have 50 (or 150). Finally, the shape of the real curve may not be a perfect bell like the one above.

Sampling distribution

Here, we take a random sample of size n = 25, measure the feature of those 25 individuals, and calculate the mean. It is unlikely to be exactly 100, but something higher or lower. Now, repeat the process with another random sample of 25 and compute the mean. Collect several such means and plot the histogram. This is the sampling distribution. If the number of means is large enough, the distribution takes a bell-curve shape, thanks to the central limit theorem.

In the case of the sampling distribution, the mean is equal to the mean of the original population distribution from which the samples were taken. However, the sampling distribution has a smaller spread. This is because the averages have lower variations than the individual observations.

standard deviation of the sampling distribution = standard deviation of the population distribution / sqrt(n). This quantity is also called the standard error.
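The whole process is easy to simulate; a standard-library Python sketch of the repeated-sampling procedure (with made-up population parameters matching the example) recovers the standard error:

```python
import random
import statistics

random.seed(1)
pop_mean, pop_sd, n = 100, 25, 25

# Draw many random samples of size n from the population and
# record each sample mean.
means = [statistics.mean(random.gauss(pop_mean, pop_sd) for _ in range(n))
         for _ in range(10_000)]

print(round(statistics.mean(means), 1))   # close to the population mean, 100
print(round(statistics.stdev(means), 1))  # close to 25 / sqrt(25) = 5
```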

Population Distributions vs Sampling Distribution Read More »

Descriptive statistics and R

Descriptive statistics summarize various parameters of a dataset. They can be measures of central tendency (e.g., mean, mode, median) or measures of variability (e.g., standard deviation, variance). Here is an R function that describes many of them in one command.

We will use the ‘iris’ dataset to illustrate.

dat <- iris
library(pastecs)
stat.desc(dat[1:4], norm = TRUE)

The output can be rounded off to include only two digits after the decimal point:

round(stat.desc(dat[1:4], norm = TRUE), 2)
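For comparison, the core measures of central tendency and variability are easy to compute with Python’s standard library as well; the sample values below are made up for illustration, not taken from iris:

```python
import statistics

# A few made-up measurements (illustrative only).
data = [5.1, 4.9, 4.7, 4.6, 5.0, 5.4, 4.6, 5.0]

print(statistics.mean(data))       # central tendency
print(statistics.median(data))
print(statistics.stdev(data))      # variability
print(statistics.variance(data))
```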

Descriptive statistics and R Read More »

The Central Limit and Hypothesis Testing

Assessing newly collected data against the known population is the foundation of hypothesis testing. The most prominent hypothesis test methods, the Z-test and the t-test, use the central limit theorem. The theorem prescribes a normal distribution for key sample statistics, e.g., the average, with a spread defined by the standard error. In other words, knowing the population’s mean, its standard deviation, and the number of observations, one first builds the normal distribution. Here is one example.

The average rainfall in August for a region is 80 mm, with a standard deviation of 25 mm. What is the probability of observing rainfall in excess of 84 mm this August as an average of 100 samples from the region?

The central limit theorem dictates the distribution to be a normal distribution with mean = 80 and standard deviation = 25/sqrt(100) = 2.5.

Mark the point corresponding to 84; the required probability is the area under the curve above X = 84 (the shaded region below).

The function, ‘pnormGC’, from the package ‘tigerstats’ can do the job for you in R.

library(tigerstats)
pnormGC(84, region = "above", mean = 80, sd = 2.5, graph = TRUE)

The traditional way is to calculate the Z statistic and determine the probability from the lookup table.

P(Z > [84-80]/2.5) = P(Z > 1.6) 
1 - 0.9452 = 0.0548

Well, you can also use the R command instead of searching in the lookup table.

1 - pnorm(1.6)
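The same lookup can be done with nothing more than the error function; this standard-library Python sketch reproduces both the Z statistic and the tail probability:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = (84 - 80) / 2.5        # standard error = 25 / sqrt(100) = 2.5
p = 1 - norm_cdf(z)
print(round(z, 1), round(p, 4))   # 1.6 0.0548
```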

The Central Limit and Hypothesis Testing Read More »