Calculate p-value from z-score for hypothesis testing
Ronald Fisher popularized p < 0.05 as the significance threshold in the 1920s, but it was meant as a rough guide, not a rigid rule.
Many published studies with p < 0.05 have failed to replicate, leading to reforms in how scientists interpret p-values.
A p-value is NOT the probability that your hypothesis is true. It's the probability of seeing data at least as extreme as yours if the null hypothesis were true.
A tiny p-value doesn't mean a big effect. With large samples, even trivial differences can be 'statistically significant'.
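A quick sketch of this point, using hypothetical numbers: a mean shift of 0.01 standard deviations is negligible in practice, yet with a large enough sample its z-score grows and the p-value collapses below any threshold.

```python
import math

def two_tailed_p(z):
    # Two-tailed p-value from a z-score, standard normal CDF via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical effect: a 0.01-standard-deviation mean shift (sd = 1)
effect_sd = 0.01
for n in (100, 1_000_000):
    z = effect_sd * math.sqrt(n)   # z = effect / (sd / sqrt(n))
    print(n, two_tailed_p(z))
```

With n = 100 the p-value is far above 0.05; with n = 1,000,000 the same trivial effect is "significant" at essentially any alpha.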
If you run 20 tests at p < 0.05, expect 1 false positive by chance alone. Bonferroni correction can help.
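The Bonferroni correction simply divides alpha by the number of tests. A minimal sketch (the z-score of 2.2 is an illustrative value, not from the text): a result that clears the uncorrected 0.05 threshold can fail the corrected one.

```python
import math

def two_tailed_p(z):
    # Two-tailed p-value from a z-score, standard normal CDF via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

alpha = 0.05
m = 20                          # number of tests run
alpha_bonferroni = alpha / m    # 0.0025 per-test threshold

p = two_tailed_p(2.2)           # p is about 0.028
print(p < alpha)                # significant uncorrected
print(p < alpha_bonferroni)     # not significant after correction
```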
Many statisticians prefer confidence intervals over p-values because they show both significance AND effect size.
The Higgs boson discovery required p < 0.0000003 (5 sigma) to claim discovery, far stricter than most sciences.
Bayesian statistics offer an alternative approach, directly calculating the probability of hypotheses given the data.
One-Tailed P-Value
P(Z > |z|) = 1 - CDF(|z|)
Two-Tailed P-Value
2 × P(Z > |z|)
Significance Level (alpha)
Commonly 0.05 or 0.01
Decision Rule
Reject H0 if p < alpha
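The formulas above can be combined into one short routine. This is a sketch using only the Python standard library, where the standard normal CDF is computed from the error function: CDF(z) = 0.5 × (1 + erf(z / √2)).

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, tails=2):
    # One-tailed: P(Z > |z|) = 1 - CDF(|z|); two-tailed: twice that
    p_one = 1.0 - normal_cdf(abs(z))
    return tails * p_one

z = 1.96
alpha = 0.05
p = p_value(z, tails=2)        # about 0.05 for z = 1.96
print(p)
print(p < alpha)               # decision rule: reject H0 if p < alpha
```

For z = 1.96 the two-tailed p-value is just under 0.05, which is why 1.96 is the familiar critical value at alpha = 0.05.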