**Power** is the probability of detecting an effect, given that the effect really exists. For instance, suppose a study comparing two groups (control vs. disease) has a power of 0.8. If we could repeat the study many times, then 80% of the time we would obtain a statistically significant difference between the two groups, while the remaining 20% of the time we would fail to obtain a statistically significant result, even though the effect truly exists.
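This long-run interpretation can be checked by simulation. The sketch below (a hypothetical illustration, assuming two groups of 63 subjects, a true mean difference of 0.5 standard deviations, and a two-sided z-test with known unit variance) repeats the experiment many times and counts how often the test rejects; the fraction comes out close to 0.8:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

def simulate_power(d=0.5, n=63, alpha=0.05, n_sims=2000, seed=0):
    """Fraction of simulated experiments in which a two-sample z-test
    (known sigma = 1) detects a true mean difference d at level alpha.

    All parameter values here are illustrative assumptions, not taken
    from the text above.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided cutoff
    hits = 0
    for _ in range(n_sims):
        control = [rng.gauss(0, 1) for _ in range(n)]  # no effect
        disease = [rng.gauss(d, 1) for _ in range(n)]  # true effect d
        # z statistic for difference in means, sigma = 1 in both groups
        z = (mean(disease) - mean(control)) / sqrt(2 / n)
        if abs(z) > z_crit:
            hits += 1
    return hits / n_sims

print(simulate_power())  # roughly 0.8
```

Each simulated run is one hypothetical replication of the study; the printed fraction is the empirical power, which converges to the theoretical value as the number of simulations grows.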

Three major factors determine power:

- Effect size, usually defined as the difference between the two group means divided by the pooled standard deviation. All else being equal, a larger effect size leads to greater power;
- Degree of confidence, governed by the p-value cutoff (alpha) for statistical significance. All else being equal, requiring a higher degree of confidence (i.e., a smaller alpha) reduces power;
- Sample size: more samples generally increase power. In many cases, the sample size needed to reach a given power (e.g., 0.8) is the quantity of interest.

In practice, researchers are most often interested in the sample size (number of subjects) required to achieve sufficient power.
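The three factors combine into a closed-form sample-size estimate. A minimal sketch, using the standard normal-approximation formula for a two-sided two-sample test (the function name and defaults are assumptions for illustration):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate n per group for a two-sided two-sample comparison.

    Uses the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (difference in means over
    the pooled standard deviation). Defaults are conventional choices.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the alpha cutoff
    z_power = z(power)          # quantile corresponding to target power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(sample_size_per_group(0.5))              # → 63 per group
print(sample_size_per_group(0.8))              # larger effect, fewer subjects
print(sample_size_per_group(0.5, alpha=0.01))  # stricter alpha, more subjects
```

The three calls mirror the factors above: a medium effect (d = 0.5) at alpha = 0.05 and power 0.8 needs about 63 subjects per group; a larger effect shrinks that requirement, while demanding a higher degree of confidence raises it. (An exact t-test calculation, e.g. statsmodels' `TTestIndPower`, gives a slightly larger answer because it accounts for estimating the variance.)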