The 5 Thesis Statistics Mistakes That Cost Students Their Grade (And How to Catch Them Before Your Defense)
5 min read
Most thesis statistics mistakes are not random - the same five errors appear in thesis after thesis, and they go uncaught because the output looks correct in SPSS. This guide identifies each one, explains why it happens, and gives you a pre-submission checklist to catch them before your supervisor does.
Key takeaways
- Treating Likert items as metric is the #1 error - single items are ordinal; use Mann-Whitney U, not t-test.
- Skipping assumption checks is the #2 error - Shapiro-Wilk and Levene's must be run and reported before every parametric test.
- Statistical significance ≠ practical importance - always report Cohen's d, η², or r alongside every p-value.
- Never report only significant hypotheses - selective reporting is a methodological flaw, not a writing choice.
- p = .000 is impossible - always write p < .001 when SPSS shows zero.
Mistake 1: Treating Ordinal Likert Scale Data as Metric (CRITICAL)
Single Likert items (e.g., 1–5 satisfaction rating) are ordinal data. The distance between 1 and 2 is not guaranteed to equal the distance between 4 and 5. Running a t-test or ANOVA on single ordinal items violates the interval scale assumption.
SPSS will not warn you. It will produce a t-statistic and p-value even though the test is not valid for ordinal data.
| Data Type | Correct Test for 2 Groups | Correct Test for 3+ Groups |
|---|---|---|
| Single Likert item (ordinal) | Mann-Whitney U | Kruskal-Wallis |
| Composite score (5+ items, α ≥ .70) | t-test (defensible) | ANOVA (defensible) |
CRITICAL: Check your SPSS Variable View. If any Likert item is set to 'Scale' measurement level, fix it to 'Ordinal'. This one setting error can invalidate your entire analysis.
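As an illustration, here is what the correct non-parametric comparison looks like in Python with SciPy (the group data are hypothetical):

```python
# Comparing a single 1-5 Likert item between two groups.
# Mann-Whitney U compares rank distributions, so it does not assume
# interval-scaled data the way an independent-samples t-test does.
from scipy import stats

group_a = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]  # hypothetical Likert responses
group_b = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]

u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```

Note that SciPy, like SPSS, will happily run `ttest_ind` on the same lists - choosing the right test is your responsibility, not the software's.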
Mistake 2: Skipping Assumption Checks Before Parametric Tests (CRITICAL)
Running a t-test or ANOVA without checking normality and equal variances is a fundamental methodological error. Your supervisor will ask for these results. 'I ran the test' is not sufficient - you must show that its assumptions were met.
Fix: run Shapiro-Wilk (normality) and Levene's (equal variances) first. Report both. State which result justified your choice of test.
CRITICAL: No assumption checks = no valid interpretation. If you cannot show your data met the parametric assumptions, your test results cannot be trusted.
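A minimal sketch of this workflow in Python with SciPy (the data are hypothetical): check both assumptions first, then let the results decide between the parametric test and its non-parametric fallback.

```python
from scipy import stats

group_a = [21.0, 23.5, 19.8, 22.1, 24.0, 20.5, 22.8, 21.7]  # hypothetical scores
group_b = [25.2, 24.1, 26.8, 23.9, 27.0, 25.5, 24.8, 26.1]

# Normality per group: Shapiro-Wilk. p > .05 = no evidence of non-normality.
w_a, p_norm_a = stats.shapiro(group_a)
w_b, p_norm_b = stats.shapiro(group_b)

# Equal variances across groups: Levene's test.
lev_stat, p_lev = stats.levene(group_a, group_b)

if p_norm_a > .05 and p_norm_b > .05 and p_lev > .05:
    stat, p = stats.ttest_ind(group_a, group_b)     # assumptions met
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)  # fall back to non-parametric
print(f"Shapiro p = {p_norm_a:.3f} / {p_norm_b:.3f}, Levene p = {p_lev:.3f}, test p = {p:.3f}")
```

In your thesis, report the Shapiro-Wilk and Levene's results themselves, not just the final p-value, so the reader can see why the test you chose was justified.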
Mistake 3: Confusing Statistical Significance With Practical Importance (HIGH)
p < .05 means a result this extreme would be unlikely if the null hypothesis were true. It says nothing about whether the effect is large enough to matter.
With N = 500, almost any difference becomes statistically significant - including differences too small to matter in practice. With N = 20, a large real effect may not reach significance.
Effect size is the measure of practical importance. Always report it.
| Test | Effect Size Measure | Small / Medium / Large |
|---|---|---|
| t-test | Cohen's d | 0.2 / 0.5 / 0.8 |
| ANOVA | η² (eta-squared) | .01 / .06 / .14 |
| Correlation | r | .10 / .30 / .50 |
| Regression | R² | % variance explained |
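For a t-test, Cohen's d can be computed by hand from the group means and a pooled standard deviation. A minimal sketch (the helper name and data are illustrative):

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Means differ by 2 points with a pooled SD of about 3.16 -> d ≈ 0.63 (medium).
d = cohens_d([10, 12, 14, 16, 18], [8, 10, 12, 14, 16])
print(f"d = {d:.2f}")
```

Against the benchmarks in the table above, d ≈ 0.63 sits between medium (0.5) and large (0.8) - information a p-value alone cannot give you.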
Mistake 4: Reporting Only Significant Results - Selective Reporting (HIGH)
Reporting only the hypotheses that turned out significant and omitting non-significant ones is a form of selective reporting. It is not just a writing style choice - it is a methodological flaw that distorts the picture of what your data actually showed.
- All hypotheses must be reported with their full statistics, e.g. t(48) = 0.87, p = .389, d = 0.25.
- Write: 'No significant difference was found between groups, t(48) = 0.87, p = .389, d = 0.25.'
- Do NOT write: 'This hypothesis was not supported' and omit the statistics.
Selective reporting can be flagged during your defense as a methodological weakness. Report every hypothesis you tested with its full statistics - significant or not.
Mistake 5: Writing p = .000 in Your Thesis (MEDIUM)
When SPSS shows p = .000, it means the p-value is smaller than .0005 and has been rounded to three decimal places. A true p-value of zero is mathematically impossible.
Always write p < .001 - never p = .000.
This is an easy fix but appears in almost every first thesis draft.
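If you format your results programmatically, a tiny helper (illustrative, not part of SPSS) makes the fix automatic:

```python
def format_p(p):
    """Format a p-value APA-style: 'p < .001' instead of 'p = .000'."""
    if p < 0.001:
        return "p < .001"
    return f"p = {p:.3f}".replace("0.", ".")  # APA drops the leading zero

print(format_p(0.000123))  # p < .001  (SPSS would display .000)
print(format_p(0.389))     # p = .389
```

If you format p-values by hand instead, a simple search of your thesis for '.000' catches the same problem.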
Pre-Submission Checklist: Catch All 5 Mistakes
Run through this checklist before submitting to your supervisor.
| Check | What to Verify | Status |
|---|---|---|
| Likert measurement level | SPSS Variable View: Likert items = Ordinal (not Scale) | ☐ |
| Assumption checks reported | Shapiro-Wilk W and p, Levene's F and p in Methods section | ☐ |
| Effect size for every test | Cohen's d, η², or r reported next to every p-value | ☐ |
| All hypotheses reported | Non-significant results include full t/F/r, p, and effect size | ☐ |
| No p = .000 | Search thesis for '.000' - replace all with '< .001' | ☐ |
Frequently asked questions
Is it acceptable to use a t-test on Likert scale data?
Only on composite scores built from 5+ items with α ≥ .70. Single Likert items are ordinal - use Mann-Whitney U (2 groups) or Kruskal-Wallis (3+ groups) instead.
My Shapiro-Wilk test is significant - what should I do?
A significant Shapiro-Wilk result means your data deviate from normality, so a parametric test is not justified. Switch to the non-parametric alternative (e.g., Mann-Whitney U instead of a t-test) and report the Shapiro-Wilk result as the reason.
How do I know if my effect size is large enough to matter?
Compare it against the conventional benchmarks: Cohen's d of 0.2 / 0.5 / 0.8, η² of .01 / .06 / .14, or r of .10 / .30 / .50 for small / medium / large - then interpret it in the context of your field.
What is the most important assumption to check before running a t-test?
Check both normality (Shapiro-Wilk) and equal variances (Levene's). Report each result and state which one justified your final choice of test.
How do I write up a non-significant result in my thesis?
Report the full statistics, e.g., 'No significant difference was found between groups, t(48) = 0.87, p = .389, d = 0.25.' Never omit the numbers just because the result was not significant.
Further reading
- Thesis Data Analysis: The 5 Critical Steps Students Skip (With Checklist) · Data analysis
- Which Statistical Test to Use for Your Thesis: A Complete Decision Guide · Test selection
- How to Write the Statistics Results Section of Your Thesis · APA reporting
- APA Statistics Reporting: Copy-Paste Templates for Every Test in Your Thesis · APA reporting
Free tool
Not sure which statistical test to use?
Answer 5 quick questions about your research design and get the right test - with an explanation of why - in under two minutes.
Statoria Team
Statistics educators & software developers
We build Statoria to help bachelor and master students get through their thesis data analysis without stress. Our guides are written by researchers with experience in social science statistics and student supervision.