
P-Value Explained: What p < .05 Really Means, Common Myths, and How to Report It Correctly in APA


The p-value is the most reported and most misunderstood number in thesis statistics. Most students can state that p < .05 means "significant" - but cannot explain what that actually means when a supervisor asks. This guide gives you the correct definition, debunks the five most common myths found in student theses, and shows the exact APA reporting rules, including the one formatting error that appears in almost every first draft.

Free sample chapter

Data Analysis From Survey to Results

Step-by-step guidance for choosing the right test, running it, and writing up APA results - in plain language, not theory. Get the free sample chapter when you join the waitlist.

Key takeaways

  • The correct definition: the probability of observing a result this extreme IF the null hypothesis were true - not the probability the hypothesis is correct.
  • p < .05 does NOT mean the effect is large, important, or will replicate - it only measures surprise under the null.
  • Non-significant results must be reported in full: include the test statistic, exact p-value, and effect size - do not omit them.
  • Never write p = .000 - report p < .001 instead (SPSS rounds to three decimal places; a p-value of zero is impossible).
  • Effect size is always required alongside p - without Cohen's d, η², or r, a significant result is statistically incomplete.

The Correct Definition of a P-Value

The p-value is the probability of observing a result as extreme as - or more extreme than - your data, assuming the null hypothesis is true.

In plain language: if there were truly no effect in the population, how likely would you be to get a result like yours by chance alone?

  • A small p (e.g., p = .02): if there were no real effect, you would get a result this extreme only 2% of the time by chance - reason to doubt the null.
  • A large p (e.g., p = .48): a result this extreme would occur 48% of the time even with no effect - nothing unusual here.
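The definition can be made concrete with a small simulation. Below is a minimal permutation-test sketch in Python (the group data, means, and sizes are invented for illustration): we build a world where the null is true by construction - shuffling group labels so any difference is pure chance - and count how often that world produces a difference at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed data: two groups of n = 25
group_a = rng.normal(loc=5.4, scale=1.0, size=25)
group_b = rng.normal(loc=5.0, scale=1.0, size=25)
observed_diff = abs(group_a.mean() - group_b.mean())

# Simulate the null hypothesis: pool the data and reshuffle the group
# labels, so any "group difference" can only arise by chance.
pooled = np.concatenate([group_a, group_b])
n_sims = 10_000
null_diffs = np.empty(n_sims)
for i in range(n_sims):
    shuffled = rng.permutation(pooled)
    null_diffs[i] = abs(shuffled[:25].mean() - shuffled[25:].mean())

# The p-value: the proportion of null-world results at least as
# extreme as the one we actually observed
p_value = (null_diffs >= observed_diff).mean()
print(f"p = {p_value:.3f}")
```

A small `p_value` here means: even in a world with no real effect, a difference this large was rare - which is exactly (and only) what a p-value measures.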
| Term | Correct interpretation | Common wrong version |
| --- | --- | --- |
| p < .05 | Sufficient evidence to reject the null at α = .05 | The hypothesis is true with 95% probability |
| p = .03 | 3% chance of this result if the null is true | Only a 3% chance the result is a fluke |
| p > .05 | Insufficient evidence to reject the null | There is no effect |
| p < .001 | < 0.1% chance under the null - a very surprising result | The effect is very large |

5 P-Value Myths Thesis Students Write - And the Truths

These are the most common p-value misinterpretations flagged in thesis reviews.

| Myth | Truth |
| --- | --- |
| p < .05 means the effect is important | Importance = effect size (Cohen's d, η², r), not p |
| p > .05 means there is no effect | It means insufficient evidence to reject the null - not proof of no effect |
| p = .001 is "more significant" than p = .04 | Both reject H₀; p-values do not rank "how significant" a result is |
| A significant result will replicate | Small samples with p close to .05 often fail to replicate |
| 1 − p = probability that H₁ is true | p is not about hypothesis probability - it is about the data under H₀ |

How to Report Non-Significant Results in Your Thesis

A non-significant result does NOT mean there is no effect. It means your study did not find sufficient evidence to reject the null. Report non-significant results with exactly the same detail as significant ones.

Correct: "No significant difference was found between the groups, t(48) = 0.87, p = .389, d = 0.25."

Do NOT write "the hypothesis was rejected" - you failed to reject the null hypothesis, which is different.
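As a sketch of where such a reporting line comes from, here is a minimal Python example (simulated data; scipy assumed available) that runs an independent-samples t-test and computes Cohen's d by hand, so the same full statistics can be reported whether or not p crosses .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, 25)   # hypothetical scores, n = 25
group_b = rng.normal(10.5, 2.0, 25)   # hypothetical scores, n = 25

t_stat, p = stats.ttest_ind(group_a, group_b)

# Cohen's d using the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd

df = n1 + n2 - 2   # degrees of freedom for the report, here t(48)
print(f"t({df}) = {t_stat:.2f}, p = {p:.3f}, d = {d:.2f}")
```

Note that the test statistic, degrees of freedom, exact p, and d are all produced regardless of the outcome - there is nothing extra to compute only "when significant."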

Possible reasons for p > .05:

  • Sample was too small to detect the effect (low power)
  • The null hypothesis is actually true
  • Measurement noise obscured the effect
⚠️ Omitting non-significant results from your thesis is selective reporting - a methodological flaw. Every hypothesis must be reported with its full statistics, whether significant or not.

APA Format Rules for P-Values

Report exact p-values to three decimal places: p = .032, p = .007, p = .428

For very small values: p < .001 - NEVER write p = .000 (see callout)

Do not add a leading zero: write p = .05, not p = 0.05 (APA drops the zero for statistics that cannot exceed 1)

Do not write p > .05 or "ns" in place of a value - always report the exact p-value

Italicise the p in APA format

  • Correct: "The effect was significant, F(2, 87) = 8.42, p < .001, η² = .16."
  • Incorrect: "p value was .000" or "n.s." or "p > 0.05"
⚠️ If your SPSS output shows p = .000, do NOT copy it into your thesis. Report it as p < .001. SPSS rounds p-values to three decimal places, so .000 simply means p < .0005 - a true p-value of zero is mathematically impossible.
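The rules above are mechanical enough to wrap in a small helper. The following is an illustrative Python function (the name `apa_p` is my own): three decimals, no leading zero, and "p < .001" instead of an impossible "p = .000":

```python
def apa_p(p: float) -> str:
    """Format a p-value in APA style: three decimals, no leading
    zero, and 'p < .001' for very small values (never 'p = .000')."""
    if p < 0.001:
        return "p < .001"
    # Format to three decimals, then strip the leading zero
    return f"p = {p:.3f}".replace("0.", ".", 1)

print(apa_p(0.0004))  # p < .001
print(apa_p(0.032))   # p = .032
print(apa_p(0.428))   # p = .428
```

A helper like this also prevents the p = .000 copy-paste error at the source, because the "< .001" branch fires before any rounding to three zeros can happen.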

Effect Size Is Always Required Alongside P-Value

A p-value tells you whether a result is surprising under the null. An effect size tells you how large the effect actually is. Both are required in APA reporting.

With a large sample, even a trivially small difference produces p < .05. With a small sample, a clinically important effect may produce p > .05. The p-value alone cannot distinguish these cases.

Effect size measures by test type:

  • t-test → Cohen's d (small = 0.2, medium = 0.5, large = 0.8)
  • ANOVA → η² or partial η² (small = .01, medium = .06, large = .14)
  • Correlation → r (small = .10, medium = .30, large = .50)

Always report effect size. A significant result without it is statistically incomplete.
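To show that the effect size is a separate computation from the p-value, here is a sketch in Python (simulated three-group data; scipy assumed available) that runs a one-way ANOVA and then derives η² directly from the sums of squares:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Three hypothetical groups of n = 30 for a one-way ANOVA
g1, g2, g3 = (rng.normal(m, 1.5, 30) for m in (5.0, 5.6, 6.1))

f_stat, p = stats.f_oneway(g1, g2, g3)

# Eta squared = between-groups sum of squares / total sum of squares
all_data = np.concatenate([g1, g2, g3])
grand_mean = all_data.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (g1, g2, g3))
ss_total = ((all_data - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total

df_within = len(all_data) - 3   # N minus number of groups
print(f"F(2, {df_within}) = {f_stat:.2f}, p = {p:.3f}, "
      f"eta-squared = {eta_sq:.2f}")
```

Notice that η² comes from the data's variance structure, not from p: you could double the sample size, watch p shrink dramatically, and see η² barely move.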

Frequently asked questions

What does p < .001 mean in simple terms?

It means the probability of observing a result this extreme by chance, if the null hypothesis were true, is less than 0.1% (one in a thousand). It is a very strong statistical result, but remember it does not tell you about the size or practical importance of the effect - report effect size alongside it.

Is p = .049 more significant than p = .032?

No. Both are below .05 and both lead to the same decision: reject the null hypothesis. "Significance" in statistics is binary at any given threshold - either you reject or you do not. The exact p-value matters for reporting but does not indicate "more" or "less" significance within the same decision outcome.

What should I do if my p-value is exactly .05?

A p-value of exactly .05 is on the boundary. Conventionally, p ≤ .05 is considered significant, so p = .05 technically meets the threshold. However, discuss this in your results: note that the result is marginal and that conclusions should be interpreted cautiously. Do not simply say it "is not significant" or "is significant" without acknowledging the boundary nature of the result.

My SPSS output shows p = .000 - can I report that?

No. SPSS displays .000 when the p-value is smaller than .0005 (it rounds to three decimal places). You must report this as p < .001, never as p = .000 or p = 0. A p-value of zero is mathematically impossible.

My result is non-significant - does that mean my hypothesis was wrong?

Not necessarily. A non-significant result means you did not find sufficient evidence to reject the null hypothesis - not that the null hypothesis is true. Possible reasons: the effect doesn't exist, your sample was too small to detect it (low statistical power), or measurement error obscured a real effect. Report the result with full statistics, discuss what it might mean, and consider whether your sample size was adequate.

What is the difference between one-tailed and two-tailed p-values?

A two-tailed test checks for an effect in either direction (e.g., Group A ≠ Group B). A one-tailed test checks for an effect in one specific direction (e.g., Group A > Group B). One-tailed tests are more powerful but require a strong directional hypothesis stated before data collection. Most thesis research uses two-tailed tests. Always specify which you used in your methods section.
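The distinction can be seen directly in code. This sketch (simulated data) uses scipy's `alternative` parameter of `ttest_ind` to run the same comparison both ways; `"greater"` tests the directional hypothesis Group A > Group B:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(10.6, 2.0, 30)   # hypothetical scores
group_b = rng.normal(10.0, 2.0, 30)   # hypothetical scores

# Two-tailed: is there a difference in either direction?
_, p_two = stats.ttest_ind(group_a, group_b, alternative="two-sided")

# One-tailed: is Group A specifically greater than Group B?
# (requires a directional hypothesis stated BEFORE data collection)
_, p_one = stats.ttest_ind(group_a, group_b, alternative="greater")

print(f"two-tailed p = {p_two:.3f}, one-tailed p = {p_one:.3f}")
```

When the observed difference lies in the predicted direction, the one-tailed p is half the two-tailed p - which is exactly why the direction must be fixed in advance, not chosen after seeing the data.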

Free tool

Not sure which statistical test to use?

Answer 5 quick questions about your research design and get the right test - with an explanation of why - in under two minutes.

Statoria Team

Statistics educators & software developers

We build Statoria to help bachelor and master students get through their thesis data analysis without stress. Our guides are written by researchers with experience in social science statistics and student supervision.
