Jan 16, 2023

March 29, 2021

It seems there has been a real backlash against evidence-based practice (EBP) of late, and part of the issue, I believe, is that there is sometimes a very real MISUNDERSTANDING about what evidence-based practice is and is not.

This backlash seems to revolve, in part, around the idea that EBP is too restrictive and does not have all the answers, and that some folk CAN place too much emphasis on ‘evidence’ in their practice, perhaps without enough critical appraisal, leading to rigid and inflexible perspectives. We do, in my opinion, have to acknowledge that an overly empirical perspective can be as problematic as simply rejecting EBP because it does not provide all the answers or is not correct 100% of the time.

So perhaps a better understanding of EBP is needed? Something being ‘evidence based’ does not create a certainty about what will and will not ‘work’. It’s not a rigid protocol that produces individually consistent results. It is a way of making informed decisions based on a scientific process rather than just someone’s opinion or experiences.

What has become clear is the binary and tribal way that such topics, in this case EBP, are approached in the therapist space. Are you an evidence-based therapist? Are you a manual therapist? Are you an exercise therapist? Are you a pain science therapist? They seem to have become labels that are used to generalise and berate others with.


Maybe, just maybe, this discussion is not really about EBP itself but more about how EBP is USED by people? It is a pretty blunt tool if it is not used in the “judicious” way Sackett originally suggested. EBP is a lot like the BioPsychoSocial model in that it’s much more of a philosophy, a way of thinking, than a step-by-step method to follow.

Both EBP & BPS are far more conceptual and broader than traditional clinical methods/models, which is probably both a blessing and a curse, and a common criticism is that they do not provide clear clinical application. The biggest flaw I see in how both EBP & BPS approaches are used is the cherry-picking of one of the domains to justify clinical decision making. The 3 areas of EBP, being research data, clinical experience and patient preference, are to be used TOGETHER rather than being trichotomized to support or justify clinical decisions. A famous quip about statistics, often attributed to Andrew Lang rather than Housman, makes a similar point:

“Some individuals use statistics as a drunk man uses lamp-posts — for support rather than for illumination”

A great example of this bastardisation of EBP is using patient preference alone to satisfy the criteria of EBP. Patient preference is not simply about which intervention someone should receive; there are many decisions beyond the intervention that a person could need to be involved in. Maybe a better term would be patient perspectives, as this encompasses a much wider view of the therapeutic process than just “they wanted acupuncture (just an example), so I gave it to them”, with this satisfying an EBP requirement and therefore justifying its use.


As someone biased towards EBP, it’s important to confront the problems, issues and perhaps misconceptions that exist with regards to EBP:


Evidence can often be unclear and conflicting; it does not give a clear, infallible pathway to clinical success. This needs to be accepted as part of the process of using research evidence. Unfortunately, this can also be a reason used by some to reject EBP.


The idea that because something appears in the conclusion of a paper it magically gets propelled into concrete truth, beyond reproach or critique, is probably a major flaw in the way EBP is used. This can lead to therapists trading PubMed abstracts on various social media platforms, sometimes (fuck it, many times) without the paper even being read. Equally, though, when a paper does not fit our biases, out comes the fine-tooth comb to find a problem : )


Clinicians perhaps want more from EBP than it can currently provide, such as really big questions being answered definitively by a single paper. A popular example is “Does exercise work better than manual therapy?”. That question has never, ever been asked (as much as we might want it to : ) because it is way too broad. You have to define the condition you are looking for it to “work” on, how you measure ‘working’, the population you are studying, the way in which the exercise or manual therapy is performed, etc.


Another issue is the very concept of what “works”. This may stem from the idea of accepting or rejecting a hypothesis, as in a frequentist approach. Simply put, there are two binary options: something works or it does not, decided by accepting or rejecting a hypothesis.

P values have often been used to make these decisions, although thankfully this is being moved away from, because they are not really fit for that purpose. A p value tells us how compatible the data are with the statistical model (including the null hypothesis) rather than how likely a hypothesis is to be correct. The stats are only as good as the methods used to generate them, which is why methodology is such a big factor in the conclusions taken from a paper.


Patient narratives are also a really important part of the evidence we should use to make decisions. Yes, they are not double-blinded or randomised, but they are the experiences of THIS person who needs our help. Patient narratives are also far more than just what treatment someone received and how successful it was, which is what is often used to point out the unreliability of a narrative.


The evidence base around a subject can be vast; take back pain, for example. That whole base needs to be considered, rather than just a favourite paper that supports a bias. My paper beats your paper is like a game of Top Trumps and not really how EBP is supposed to work.


Before we start accepting or rejecting EBP maybe we should formulate our own idea of what it is and what it tells us. What is our personal approach or philosophy in this area? Perhaps too often personal philosophies on this and other subjects are influenced by other people’s rather than taking the time to formulate our own?

What’s my view? Well, EBP does not give us a cast-iron answer for the patient in front of us. It does not predict precisely what is going to happen in 2, 6 or 12 weeks, and it often does not tell us precisely why something has happened; there are so many things not being controlled for or measured. But it can help us understand probabilities and estimates around a question at a broader population level in a less biased way. It should afford me an estimate or a parameter of what is most likely to happen, provided that the sampling reflects my patient and appropriate methods were used.

This is exactly why statisticians appear to be moving away from Fisher-style hypothesis testing towards estimates of effect, with a greater emphasis on confidence intervals. EBP also helps us control for some of the natural biases that go into making us human! Things like randomisation & blinding are positives, although they can be applied in a very blunt way as criticism of research methods.
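To make that distinction between hypothesis testing and estimation concrete, here is a toy sketch in Python. All the numbers are invented purely for illustration (two simulated groups of outcome scores, using a simple normal approximation); the point is that a confidence interval gives a range of plausible effect sizes, while a p-value only summarises compatibility with “no difference”.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical outcome scores (e.g. pain reduction) for two made-up groups.
# These distributions are assumptions for illustration only.
group_a = [random.gauss(3.0, 2.0) for _ in range(50)]
group_b = [random.gauss(2.4, 2.0) for _ in range(50)]

def mean_diff_with_ci(a, b, z=1.96):
    """Estimate the mean difference between two groups with an
    approximate 95% confidence interval (normal approximation,
    unequal-variance standard error)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se), se

def two_sided_p(diff, se):
    """Two-sided p-value under the null of zero difference
    (normal approximation via the complementary error function)."""
    zscore = diff / se
    return math.erfc(abs(zscore) / math.sqrt(2))

diff, (ci_lo, ci_hi), se = mean_diff_with_ci(group_a, group_b)
p = two_sided_p(diff, se)

# The estimate + interval tells us how big the effect plausibly is;
# the p-value alone collapses that into a single compatibility number.
print(f"estimated difference: {diff:.2f}")
print(f"approx. 95% CI: ({ci_lo:.2f}, {ci_hi:.2f})")
print(f"p-value: {p:.3f}")
```

A real analysis would use a proper t-distribution (or a dedicated library) rather than this normal approximation, but the contrast stands: the interval is the informative part.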

Just because EBP is not perfect or does not provide all the answers does not mean it should simply be rejected. That is exactly the binary approach that has led us to this point, and to accept or reject EBP wholesale is not the answer. Imagine if we were not in a position to test methods and interventions? It would be like the wild west of rehab, with machines that go bing everywhere. It comes back to the judicious use of evidence, which involves an understanding of what EBP is and of the current best data on the subject being questioned. Evidence may often not tell us exactly what to do, but its value might also lie in telling us what NOT to do, and I think there is HUGE value in this.


So the research base gives me a jumping-off point and a way to narrow down my decision making; if we simply reject research, it gets replaced by a heap of other shit that certainly is not optimal healthcare. It doesn’t give me all the answers, but as I understand EBP, it’s not meant to.

We need to see therapy as being as much about informed trial and error as a set-in-stone process predicted by a research paper. The research is the informed part; the application and outcome are often a little more fluid, and that is the trial-and-error bit.


It’s the middle ground where the truth probably lies in this debate. Be too accepting of or reliant on research & evidence and we miss the point of what research is. But the opposite end, dismissing research because it’s not perfect or because something worked that had been ‘proven’ not to, is not the way forward; I suspect it will actually take us backwards. Instead let’s come back to the judicious use of research, fuelled by a better understanding of what it does and does not tell us.
