Why “p < 0.05” Might Be Misleading: The Beautiful Power of Bayesian Thinking


For years, the phrase “statistically significant” has been science’s gold stamp of approval. If your test came back with a p-value below 0.05, you were set. Publishable. “Proven.”

But over time, scientists, statisticians, and data scientists have begun to ask: Is this threshold propelling science forward, or backward?

Along came Bayesian statistics — a paradigm shift that introduces nuance, context, and flexibility into the interpretation of data.

The Issue with p < 0.05

Let’s begin by deconstructing the old guard: the frequentist paradigm, built on null hypothesis significance testing (NHST). Within this paradigm:

You start with a null hypothesis (e.g., “This new drug has no effect.”)

You collect data and compute a p-value

If p < 0.05, you reject the null and report the result as “significant”
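In code, the whole recipe fits in a few lines. Here is a minimal sketch in Python with scipy (the group measurements are invented toy numbers, purely for illustration):

```python
# Minimal frequentist NHST sketch (toy measurements invented for illustration).
from scipy import stats

# Outcomes for a treatment group and a control group
treatment = [2.1, 2.5, 1.8, 2.9, 2.3, 2.7, 2.0, 2.6]
control   = [1.9, 1.7, 2.0, 1.6, 2.2, 1.8, 1.5, 2.1]

# Two-sample t-test; the null hypothesis is "no difference in means"
t_stat, p_value = stats.ttest_ind(treatment, control)

# The entire verdict hinges on one threshold
if p_value < 0.05:
    print(f"p = {p_value:.3f} -> 'significant': reject the null")
else:
    print(f"p = {p_value:.3f} -> 'not significant'")
```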

Yet this binary verdict has some far-reaching flaws:

  1. Arbitrary Thresholds: The 0.05 cutoff is a historical convention, not a science-derived constant. Is 0.049 “significant” while 0.051 is not? That razor-thin distinction is usually deceptive (the quick simulation below shows how unstable p-values near the cutoff can be).
  2. Publication Bias & P-Hacking: Researchers may run several analyses, adjust models, or exclude inconvenient data simply to push the p-value below 0.05. Worse, journals publish “non-significant” findings less often, warping the scientific record.
  3. Lack of Context: The p-value tells you nothing about how big the effect is, how likely you are to be right, or how prior knowledge fits in. It’s a yes-or-no system in a world that demands shades of gray.
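To see how razor-thin that cutoff is, here is a small simulation (effect size, sample size, and run count are arbitrary choices): the same modest true effect, tested over and over, lands on both sides of 0.05 purely by sampling luck.

```python
# Simulate many identical experiments with the same modest true effect;
# effect size, sample size, and seed are arbitrary illustration choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, true_effect, runs = 30, 0.4, 1000

p_values = []
for _ in range(runs):
    treatment = rng.normal(true_effect, 1.0, n)  # mean 0.4, sd 1
    control = rng.normal(0.0, 1.0, n)            # mean 0,   sd 1
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

p_values = np.array(p_values)
print(f"runs called 'significant': {np.mean(p_values < 0.05):.0%}")
print(f"runs landing between 0.04 and 0.06: {np.sum((p_values > 0.04) & (p_values < 0.06))}")
```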


The Bayesian Alternative: Probabilistic Thinking

While classical statistics asks, “If the null hypothesis is true, how likely is this data?”, Bayesian statistics reverses the question:

“Given this data, how probable is the hypothesis?”

That is a subtle but powerful change. Bayesian techniques are grounded in Bayes’ Theorem, which refines the probability of a hypothesis as more evidence is obtained.

Bayes' Theorem in a Nutshell:

P(H|D) = P(D|H) × P(H) / P(D)

Where:

P(H|D) = the posterior (probability of hypothesis H given data D)

P(D|H) = the likelihood (probability of seeing the data if H is true)

P(H) = the prior (your initial belief in H)

P(D) = the evidence (probability of the data under all hypotheses)
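To make the arithmetic concrete, here is a toy worked example in Python (every number below is invented for illustration):

```python
# Bayes' theorem on a toy two-hypothesis problem (all numbers invented).
# H = "the drug works", ~H = "the drug does nothing"

prior_H = 0.30            # P(H): initial belief that the drug works
like_D_given_H = 0.80     # P(D|H): probability of this data if H is true
like_D_given_notH = 0.25  # P(D|~H): probability of this data anyway

# P(D): probability of the data under all hypotheses
p_data = like_D_given_H * prior_H + like_D_given_notH * (1 - prior_H)

# P(H|D): the posterior, via Bayes' theorem
posterior_H = like_D_given_H * prior_H / p_data
print(f"P(H|D) = {posterior_H:.2f}")  # belief updated from 0.30 to about 0.58
```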

Rather than giving you a yes/no response, Bayesian statistics give you a probability distribution: a more natural, richer representation of uncertainty.

A Real-Life Example: Drug Effectiveness

Suppose you’re trying out a new flu medication.

A frequentist test gives you p = 0.04. Success?

But suppose your sample was small. Or your historical data suggested only a weak effect?

A Bayesian approach would meld prior information (e.g., analogous drugs have had weak effects) with the new data to calculate an updated probability that the drug is effective, rather than merely deciding whether to “reject” the null.

Instead of:
“Significant at p < 0.05”

You get:
“There’s a 74% chance the drug has an effect bigger than 20%”

Which one would you rather believe as a physician? As a patient?
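One common way to produce a statement like that is a conjugate Beta-Binomial model. Here is a sketch (the prior, the trial counts, and the 20% threshold are all made-up illustration values, not real trial data):

```python
# Beta-Binomial sketch: posterior probability that a drug's response rate
# exceeds 20% (prior, trial counts, and threshold all invented).
from scipy import stats

# Prior: analogous drugs suggest a weak effect, so center the prior low
a0, b0 = 2, 8                     # Beta(2, 8) has mean 0.2

# A small new trial: 7 responders out of 20 patients
successes, trials = 7, 20

# Conjugacy: posterior is Beta(a0 + successes, b0 + failures)
posterior = stats.beta(a0 + successes, b0 + (trials - successes))

# Probability the true response rate exceeds 20%
p_above_20 = 1 - posterior.cdf(0.20)
print(f"P(response rate > 20%) = {p_above_20:.0%}")
```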

The Advantages of Bayesian Statistics & Reasoning

  1. Updates with New Evidence: Science moves forward, and so should your models. Bayesian methods let you update results as new data come in rather than starting over each time (see the sketch after this list).
  2. Quantifies Uncertainty: Instead of binary outcomes, Bayesian statistics gives full probability distributions, a more honest depiction of doubt and confidence.
  3. More Robust to Small Samples: Where frequentist tests struggle with small sample sizes, Bayesian inference can still make reasonable estimates, especially with informative priors.
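Here is what point 1 looks like in practice, again with a conjugate Beta model (batch sizes and counts invented): yesterday’s posterior simply becomes today’s prior.

```python
# Sequential Bayesian updating: each batch's posterior becomes the prior
# for the next batch (the batch data are invented).
from scipy import stats

a, b = 1, 1                            # flat Beta(1, 1) prior: no opinion yet
batches = [(3, 10), (6, 10), (5, 10)]  # (successes, trials) as data arrives

for i, (successes, trials) in enumerate(batches, start=1):
    a += successes                     # conjugate update: just add the counts
    b += trials - successes
    print(f"after batch {i}: posterior mean = {stats.beta(a, b).mean():.2f}")
```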

But Isn’t It Subjective?

Critics call Bayesian statistics “subjective” because they depend on priors. But in practice:

You can try out different priors and see how much they change the answer (a quick sensitivity check appears below)

Priors can be non-informative (i.e., deliberately neutral) when necessary

Transparency about priors is a strength, not a weakness

Subjectivity is already inherent in research — from choosing models to cleaning data. Bayesian statistics simply make those assumptions explicit.
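As a quick illustration of that first point, here is a prior-sensitivity check (data and priors invented): with a reasonable amount of data, quite different priors end up close together.

```python
# Prior sensitivity check: same data, three different priors
# (all numbers invented for illustration).
from scipy import stats

successes, trials = 35, 100   # observed data

priors = {
    "skeptical  Beta(2, 8)": (2, 8),
    "flat       Beta(1, 1)": (1, 1),
    "optimistic Beta(8, 2)": (8, 2),
}

for name, (a, b) in priors.items():
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"{name}: posterior mean = {posterior.mean():.2f}")
```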

Where Bayesian Statistics Are Already Winning

  1. Machine learning & AI: Bayesian models drive spam filters, recommendation systems, and medical diagnostics
  2. Epidemiology: Applied during COVID-19 to predict rates of infection and measure treatment effectiveness
  3. Astronomy & Physics: Where uncertainties are huge and prior knowledge is essential

Conclusion: The Future is Bayesian

Science transcends black-and-white thinking. Bayesian statistics invites us to embrace uncertainty, reason with context, and present results with more meaning.

It’s not just a statistical technique — it’s a more intelligent way of thinking.

So, the next time you see p < 0.05, ask yourself: Is this really the whole story? Or is it time to start thinking like a Bayesian?
