⚠️ Small Sample Alert

Small studies can be misleading! Learn why sample size matters for reliable conclusions.

🎛️ Study Parameters

Sample Size (n): 30
Effect Size (Cohen's d): 0.5
Significance Level (α): 0.05

📈 Statistical Power

68% chance of detecting a true effect. This is moderate power; the usual target is ≥ 80%.
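
If you want to check a number like this yourself, the standard power calculators in statsmodels will do it. Here is a minimal sketch assuming an independent two-sample t-test (an assumption on my part; the widget doesn't say which test it uses, and a different design gives a different figure):

```python
# Power calculation, assuming an independent two-sample t-test
# with n = 30 per group, d = 0.5, and a two-sided alpha of 0.05.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5,  # Cohen's d
                              nobs1=30,         # participants per group
                              alpha=0.05)       # significance level
print(f"Power: {power:.0%}")  # ~47% under these assumptions
```

Under these two-sample assumptions the power comes out lower than the 68% shown above; a one-sample or paired design with the same n yields a higher figure, which is one reason to always note which test a power number assumes.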

🎯 Margin of Error

±7.3% at 95% confidence. This is moderate precision; a smaller margin of error means a more precise estimate.
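
The margin of error for a simple proportion comes from the textbook formula z·√(p(1−p)/n). A sketch at 95% confidence follows; note that the worst case p = 0.5 with n = 30 gives roughly ±18%, so the ±7.3% above must rest on different assumptions (such as a larger effective sample):

```python
# Margin of error for a proportion at 95% confidence: z * sqrt(p(1-p)/n).
from math import sqrt
from scipy.stats import norm

n, p = 30, 0.5                       # sample size; p = 0.5 is the worst case
z = norm.ppf(0.975)                  # ~1.96 for a 95% interval
moe = z * sqrt(p * (1 - p) / n)
print(f"Margin of error: ±{moe:.1%}")  # ±17.9% at n = 30
```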

[Chart: Sample Size vs. Statistical Power]

🔍 What This Means

With a sample size of 30 and a medium effect size (d = 0.5), your study has 68% power to detect a true effect. That means a 32% chance of missing a real finding (a Type II error). Consider increasing your sample size to reach 80% power.
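
To act on that suggestion, the same statsmodels calculator can be run in reverse to solve for the sample size that reaches 80% power (again assuming a two-sample t-test):

```python
# Solve for the per-group n needed for 80% power at d = 0.5, alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

n_needed = TTestIndPower().solve_power(effect_size=0.5,
                                       alpha=0.05,
                                       power=0.80)
print(f"Required n per group: {n_needed:.0f}")  # ~64 under these assumptions
```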

📚 Real-World Examples of Sample Size Issues

Miracle Cure Study: "New supplement cures headaches!" (n=8) - far too small to trust
Psychology Finding: "Therapy reduces anxiety by 50%" (n=15) - needs replication (see the simulation below)
Education Innovation: "New teaching method improves scores" (n=25) - underpowered
Medical Treatment: "Drug shows promise in trial" (n=500) - much more reliable
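
A short simulation makes the replication point concrete. The scenario below is hypothetical: suppose therapy truly reduces anxiety by half a standard deviation (d = 0.5), and count how often a study with n = 15 per group actually detects it:

```python
# Simulate many n = 15 studies of a real d = 0.5 effect and count detections.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, d, trials = 15, 0.5, 10_000
hits = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)   # standardized anxiety scores
    therapy = rng.normal(-d, 1.0, n)    # therapy truly lowers scores by d
    if ttest_ind(control, therapy).pvalue < 0.05:
        hits += 1
print(f"Detected the real effect in {hits / trials:.0%} of studies")
```

Only about a quarter of these studies find the effect; the rest are Type II errors, which is exactly why a single n = 15 finding needs replication.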

🧠 Key Concepts to Remember

Statistical Power

The probability of detecting an effect when it truly exists. Higher power means less chance of missing real findings.

Type I Error (α)

False positive - concluding there's an effect when there isn't. Usually set at 5% (p < 0.05).
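
You can watch α at work with a null simulation: when there is no real effect at all, roughly 5% of tests still come out "significant" (a hypothetical setup, mirroring the snippets above):

```python
# With no true effect, about alpha = 5% of t-tests are still significant.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
trials = 10_000
false_pos = sum(
    ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue < 0.05
    for _ in range(trials)
)
print(f"False-positive rate: {false_pos / trials:.1%}")  # ~5.0%
```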

Type II Error (β)

False negative - missing a real effect. Power = 1 - β, so keeping β low means keeping power high.

Effect Size

How big the difference is, measured here as Cohen's d. Large effects can be detected with small samples; small effects need much larger ones.
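
Cohen's d is simply the mean difference expressed in pooled-standard-deviation units; a minimal implementation (function name and data are illustrative):

```python
# Cohen's d: (mean difference) / pooled standard deviation.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(cohens_d([5, 6, 7, 8], [4, 5, 5, 6]))  # ~1.39: a large effect
```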