P-Value Explained: What p < .05 Really Means, Common Myths, and How to Report It Correctly in APA
The p-value is the most reported and most misunderstood number in thesis statistics. Most students can state that p < .05 means significant - but cannot explain what it actually means when a supervisor asks. This guide gives you the correct definition, debunks the five most common myths written in student theses, and shows the exact APA reporting rules including the one formatting error that appears in almost every first draft.
Key takeaways
- The correct definition: the probability of observing a result this extreme IF the null hypothesis were true - not the probability the hypothesis is correct.
- p < .05 does NOT mean the effect is large, important, or will replicate - it only measures surprise under the null.
- Non-significant results must be reported in full: include the test statistic, exact p-value, and effect size - do not omit them.
- Never write p = .000 - report p < .001 instead (SPSS rounds to three decimal places; a p-value of zero is impossible).
- Effect size is always required alongside p - without Cohen's d, η², or r, a significant result is statistically incomplete.
The Correct Definition of a P-Value
The p-value is the probability of observing a result as extreme as - or more extreme than - your data, assuming the null hypothesis is true.
In plain language: if there were truly no effect in the population, how likely would you be to get a result like yours by chance alone?
- A small p (e.g., p = .02): if there were no real effect, you would get a result this extreme only 2% of the time by chance - reason to doubt the null.
- A large p (e.g., p = .48): a result this extreme would occur 48% of the time even with no effect - nothing unusual here.
| Term | Correct Interpretation | Common Wrong Version |
|---|---|---|
| p < .05 | Sufficient evidence to reject the null at α = .05 | The hypothesis is true with 95% probability |
| p = .03 | 3% chance of this result if null is true | Only a 3% chance the result is a fluke |
| p > .05 | Insufficient evidence to reject the null | There is no effect |
| p < .001 | < 0.1% chance under null - very surprising result | The effect is very large |
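This definition can be checked empirically: when the null hypothesis is true, p-values are uniformly distributed, so about 5% of tests land below .05 by chance alone. A minimal simulation sketch (simulated numbers, not real thesis data, using numpy and scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 10,000 pairs of samples drawn from the SAME population, so the null
# hypothesis ("no difference") is true for every single test.
a = rng.normal(loc=100, scale=15, size=(10_000, 30))
b = rng.normal(loc=100, scale=15, size=(10_000, 30))

# Run an independent-samples t-test on each pair at once.
p_values = stats.ttest_ind(a, b, axis=1).pvalue

# Under a true null, roughly 5% of tests come out "significant" by chance.
print(f"Share of p < .05 with no real effect: {(p_values < .05).mean():.3f}")
```

The printed share hovers around .05 - exactly the false-positive rate that α = .05 accepts by design.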
5 P-Value Myths Thesis Students Write - And the Truths
These are the most common p-value misinterpretations flagged in thesis reviews.
| Myth | Truth |
|---|---|
| p < .05 means the effect is important | Importance = effect size (Cohen's d, η², r), not p |
| p > .05 means there is no effect | It means insufficient evidence to reject null - not proof of no effect |
| p = .001 is 'more significant' than p = .04 | Both reject H₀; p-values do not rank 'how significant' a result is |
| A significant result will replicate | Small samples with p close to .05 often fail to replicate |
| 1 − p = probability that H₁ is true | p is not about hypothesis probability - it is about data under H₀ |
How to Report Non-Significant Results in Your Thesis
A non-significant result does NOT mean there is no effect. It means your study did not find sufficient evidence to reject the null. Report non-significant results with exactly the same detail as significant ones.
Correct: "No significant difference was found between the groups, t(48) = 0.87, p = .389, d = 0.25."
Do NOT write "the hypothesis was rejected" or "the null hypothesis was accepted" - you failed to reject the null hypothesis, which is a different (and weaker) statement.
- Possible reasons for p > .05:
  - The sample was too small to detect the effect (low power)
  - The null hypothesis is actually true
  - Measurement noise obscured the effect
Omitting non-significant results from your thesis is selective reporting - a methodological flaw. Every hypothesis must be reported with its full statistics, whether significant or not.
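The full set of statistics for a comparison like the one above can be produced in a few lines. A hedged sketch with made-up data (`cohens_d` is a small helper written here, not a library function):

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, 25)   # simulated scores, no real thesis data
group_b = rng.normal(103, 15, 25)

t, p = stats.ttest_ind(group_a, group_b)
df = len(group_a) + len(group_b) - 2

# Report the same level of detail regardless of significance
# (APA drops the leading zero from p).
p_text = "p < .001" if p < .001 else "p = " + f"{p:.3f}".lstrip("0")
print(f"t({df}) = {t:.2f}, {p_text}, d = {cohens_d(group_a, group_b):.2f}")
```

Whether p lands above or below .05, the printed line already contains everything the reporting rule asks for: test statistic, degrees of freedom, exact p, and effect size.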
APA Format Rules for P-Values
- Report exact p-values to three decimal places: p = .032, p = .007, p = .428
- For very small values, report p < .001 - NEVER write p = .000 (see callout below)
- Omit the leading zero, because p can never exceed 1: write p = .05, not p = 0.05
- Do not write p > .05 or "p = NS" - always report the exact value
- Italicise the p in APA format
- Correct: "The effect was significant, F(2, 87) = 8.42, p < .001, η² = .16."
- Incorrect: "p value was .000" or "n.s." or "p > 0.05"
If your SPSS output shows p = .000, do NOT copy it into your thesis. Report it as p < .001. SPSS rounds p-values to three decimal places, so p = .000 only means p < .0005 - a true p of zero is mathematically impossible.
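These formatting rules are mechanical enough to automate. A small illustrative helper (not an official APA tool):

```python
def format_apa_p(p: float) -> str:
    """Format a p-value in APA style: three decimals, no leading zero,
    and 'p < .001' instead of the impossible 'p = .000'."""
    if not 0 <= p <= 1:
        raise ValueError("a p-value must lie between 0 and 1")
    if p < .001:
        return "p < .001"
    # f-string gives e.g. '0.032'; strip the leading zero for APA style.
    return "p = " + f"{p:.3f}".lstrip("0")

print(format_apa_p(0.0004))  # p < .001  (never p = .000)
print(format_apa_p(0.032))   # p = .032
```

The p < .001 branch is exactly the fix for the SPSS p = .000 problem described in the callout above.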
Effect Size Is Always Required Alongside P-Value
A p-value tells you whether a result is surprising under the null. An effect size tells you how large the effect actually is. Both are required in APA reporting.
With a large sample, even a trivially small difference produces p < .05. With a small sample, a clinically important effect may produce p > .05. The p-value alone cannot distinguish these cases.
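A quick simulation makes both cases concrete (illustrative numbers only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Trivial effect (d = 0.05) but a huge sample: significant almost every time.
tiny_effect_a = rng.normal(100.00, 15, 20_000)
tiny_effect_b = rng.normal(100.75, 15, 20_000)   # 0.75-point shift on SD 15
print(stats.ttest_ind(tiny_effect_a, tiny_effect_b).pvalue)  # usually far below .05

# Medium effect (d = 0.5) but only 10 per group: frequently non-significant.
real_effect_a = rng.normal(100.0, 15, 10)
real_effect_b = rng.normal(107.5, 15, 10)        # 7.5-point shift on SD 15
print(stats.ttest_ind(real_effect_a, real_effect_b).pvalue)  # often above .05
```

The first comparison is statistically "significant" yet practically meaningless; the second may be practically meaningful yet non-significant - which is why p alone is never enough.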
- Effect size measures by test type:
  - t-test → Cohen's d (small = 0.2, medium = 0.5, large = 0.8)
  - ANOVA → η² or ηp² (small = .01, medium = .06, large = .14)
  - Correlation → r (small = .10, medium = .30, large = .50)
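Note that scipy's `f_oneway` returns only F and p, so η² has to be computed by hand. A minimal sketch using the standard sums-of-squares definition, with simulated data:

```python
import numpy as np
from scipy import stats

def eta_squared(*groups):
    """Eta squared for a one-way ANOVA: SS_between / SS_total."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    return ss_between / ss_total

rng = np.random.default_rng(3)
g1, g2, g3 = (rng.normal(m, 15, 30) for m in (100, 108, 104))

f, p = stats.f_oneway(g1, g2, g3)
# df_between = 3 - 1 = 2, df_within = 90 - 3 = 87
print(f"F(2, 87) = {f:.2f}, eta squared = {eta_squared(g1, g2, g3):.2f}")
```

As a sanity check, for a one-way ANOVA η² equals F·df_between / (F·df_between + df_within), so the two numbers printed above are internally consistent.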
Always report effect size. A significant result without it is statistically incomplete.
Frequently asked questions
- What does p < .001 mean in simple terms?
- Is p = .049 more significant than p = .032?
- What should I do if my p-value is exactly .05?
- My SPSS output shows p = .000 - can I report that?
- My result is non-significant - does that mean my hypothesis was wrong?
- What is the difference between one-tailed and two-tailed p-values?
Further reading
- Which Statistical Test to Use for Your Thesis: A Complete Decision Guide · Test selection
- APA Statistics Reporting: Copy-Paste Templates for Every Test in Your Thesis · APA reporting
- Confidence Interval Explained: What It Means and How to Report It in APA · Statistics fundamentals
- T-Test for Your Thesis: Complete Guide with Assumption Checks, Effect Size, and APA Copy-Paste Templates · Statistical tests
Statoria Team
Statistics educators & software developers
We build Statoria to help bachelor and master students get through their thesis data analysis without stress. Our guides are written by researchers with experience in social science statistics and student supervision.