Assessing and Integrating the Evidence

Are we teaching it backwards? Without understanding premise or argument validity in relation to research, an individual article analysis may be useless. A study may be flawed on premise alone, even with strong methodology and statistically significant results. A valid argument can still be false, and inappropriate conclusions will be drawn. Likely, this will lead to misguided justifications and explanations. Such errors can affect clinical practice, education, and future research.

Plenty of systems and checklists exist for systematically evaluating the quality of an individual study, including its design and methodology. The PEDro scale, the CONSORT Statement 25 Item Checklist, the Checklist for Qualitative Analysis, PRISMA for Systematic Reviews and Meta-analyses, and other critical appraisal checklists all guide and contribute to appropriate analysis. But a critical review should not only critique the rigor of the inquiry and the accuracy of the conclusions; it should also, and maybe more importantly, assess the study’s plausibility in the context of the whole of current scientific understanding.

What does this tell us? What can it tell us given the design? What doesn’t it tell us? Too often scientific research, clinical and otherwise, is interpreted too broadly and thus inaccurately. It’s one of the major flaws of popular “pop” science journalism: overreaching conclusions, the sensational headlines touting miracle cures and “bad” foods that cause cancer. But even a narrow lens of assessment, focused only on the specific study, may lead to improper understanding. Outside of a purely methodological critique, an article analysis cannot, and must not, be done in isolation. The current state of the literature on the topic specifically, in conjunction with basic science generally, must be taken into account.

Is this plausible?

Plausibility must not only take into account previous clinical research and outcomes studies (efficacy and effectiveness), but also basic science and current mechanistic research. Such an approach prevents reinforcing an unlikely or inaccurate explanatory model despite positive outcomes. Unfortunately, physical therapy is likely plagued by misinterpreted positive outcome studies that explicitly or implicitly support theoretical constructs that are, or may be, invalid. One specific example is the variance in explanatory models of manual therapy effects.

What are my beliefs? Biases? Preferred treatment constructs and approaches?

An overlooked area of assessment is ourselves, the person doing the analyzing. It’s imperative that the critical lens of analysis be pointed back upon its user. Rarely will an orthopedic manual physical therapist postulate that manual therapy does not work. The very best may ponder whether the mechanisms are completely outside the current understanding. A physical therapist practicing in an ICU rarely questions the effectiveness of movement and mobility. But clinicians and researchers should strive to rigorously falsify via the scientific method in order to sharpen accuracy and understanding over time. Physical therapists are inherently, and understandably, focused on the specifics of treatment that appear most important. What exercise? What technique? What works? Yet the scientific rigor, and uncomfortable thought, of attempting to prove that physical therapy does not work will lead to more specific knowledge of why it does work and of the outcomes that are attainable. It seems contradictory, but falsifiability is a basic tenet of hypothesis testing in science. So, ask yourself: what would it take to change my mind? It’s time for some serious critical thinking.

Points to Ponder

  • Hypothesis & Null Hypothesis
  • Plausibility of Hypothesis based on previous research and overall knowledge
  • Methods Critique (utilize checklists)
  • Efficacy vs. Effectiveness Design
  • What is the comparison or control group?
  • Are these groups similar in abstract variables such as frequency, duration, and one-on-one time?
  • Believability of the comparison or placebo by patient?
  • What the results can tell us given study design
  • What the results can NOT tell us given the study design
  • Plausibility of results from author’s interpretation
  • Plausibility of theoretical model presented or utilized
  • Plausibility of the discussion & conclusion in relation to current understanding of the topic specifically
  • Plausibility based on basic science, physics, mechanics, including tissue mechanics, physiology, psychology
  • How else could the results be explained? Placebo? Regression to the mean (see the sketch after this list)? Different mechanisms?
  • Did the authors make the appropriate conclusion?
  • What’s YOUR conclusion and understanding?
  • Overall summary and critique
  • How and why to integrate?
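
Regression to the mean, noted in the list above, deserves a concrete illustration. Below is a minimal, hypothetical sketch in Python; the pain scale, noise level, and enrollment cutoff are assumptions invented for the example, not values from any study. It shows how patients selected at their worst can appear to “improve” with no treatment at all.

    import random

    # Hypothetical sketch: apparent "improvement" without any treatment effect.
    # Assumption: pain ratings are noisy measurements around a stable true level.
    random.seed(1)

    TRUE_LEVEL = 5.0   # assumed stable underlying pain (0-10 scale)
    NOISE = 2.0        # assumed day-to-day measurement noise

    def noisy_score():
        """One pain rating: the true level plus random day-to-day variation."""
        return TRUE_LEVEL + random.gauss(0, NOISE)

    # Patients tend to enroll (or seek care) when symptoms are at their worst,
    # so keep only baseline scores above a cutoff, then re-measure later
    # with no intervention at all.
    baseline = [noisy_score() for _ in range(10_000)]
    enrolled = [s for s in baseline if s >= 7.0]    # a "bad day" at intake
    follow_up = [noisy_score() for _ in enrolled]   # untreated re-test

    print(f"Mean score at enrollment: {sum(enrolled) / len(enrolled):.2f}")
    print(f"Mean score at follow-up:  {sum(follow_up) / len(follow_up):.2f}")

The follow-up mean drifts back toward the underlying level purely by chance, which, without an adequate comparison group, can masquerade as a treatment effect.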

What is the take away?

“That’s valid,” you say, but what do you mean by that? A single statement can be valid by itself if it is a previously proven “truth,” but what about an argument? You remember arguments, right? Premise, premise, therefore conclusion? Funny thing about valid arguments: they have nothing to do directly with truth. Arguments can be valid and false at the same time, just as they can be invalid and true at the same time. What?
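
A quick illustration, with premises invented purely for the example: “All improvement following manual therapy is caused by tissue remodeling. This patient improved following manual therapy. Therefore, this patient’s improvement was caused by tissue remodeling.” The form is perfectly valid; the conclusion follows necessarily from the premises. But if the first premise is false, the argument is unsound, and the conclusion may be false no matter how tidy the logic.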

Since deductive arguments are the basis of all research, you need to understand this concept. As I have quoted before on a podcast: “A flawed study is still a flawed study regardless of p-value or level of evidence.” – Erik Meira, When a Valid Argument Can Be False

The complications continue. Concepts such as placebo, non-specific effects, nocebo, incentives, behavioral psychology, decision making, logical fallacies, cognitive biases, and epidemiology all play vital roles in not only which treatments we (should) utilize, but how they (may) work. In addition to clinical and scientific research, the understanding of the hows and whys of decisions in clinical practice rests upon these concepts. Given where trials of physical therapy interventions are published, it’s imperative to read outside the physical therapy specific literature. Research in psychology and behavior assists in a deeper understanding of the importance of the entire treatment encounter, in addition to how clinicians make decisions within a treatment encounter.

It is reflective and complex decision-making that integrates all sources of evidence that we should be having serious conversations about, and it’s that thoughtfulness that is required of a doctoring profession – not the myopic and obtuse yes or no to the question: “Are you evidence based?” – Jason Silvernail, DPT, DSc

So, stop volleying RCTs back and forth in an evidence ping-pong match, and begin integrating knowledge. The information from seemingly unrelated fields contains insights that can result in true evolution in our understanding of clinical practice. Surprisingly, even many of the randomized controlled trials of physical therapy interventions are not published in physical therapy specific journals. Ponder how the “evidence base” should be selected.

We rarely believe we are ignorant, but could we be wrong?

7 Replies to “Assessing and Integrating the Evidence”

  1. Thank you for writing it, Kyle. Few talk about falsification. Here’s a quick PowerPoint presentation I found that talks about Popper’s falsification in more detail, using Sigmund Freud and Karl Marx as examples of what is not a scientific theory.

    It doesn’t mean that non-scientific theories are useless or pointless; it’s just that they aren’t scientific and shouldn’t be considered so.

    https://www.youtube.com/watch?v=cIXcg1rJn2U

  2. Interesting; however, it seems to me you are taking a very universal/philosophical approach to what you see as a problem, without any examples. Evidence based practice has always been about integrating the best available evidence for the patient you are treating to make the best clinical decision possible. To say evidence overrides clinical decision making is not accurate.

  3. Steve,

    Thanks for your participation. You are correct, I do not provide any specific concrete examples in this post. I think if we look to manual therapy paradigms, trendy treatment tools, and interventions that outlast their debunking, we see many examples of where our clinical observations and integration of research have fallen short. Unfortunately, evidence based practice (as taught and utilized) likely OVER-emphasizes the results of RCTs at the expense of true plausibility. I’ve discussed the specifics of metacognition and a scientific approach to practice previously.

    I agree with your statement, and it is well said. EBP is about the integration and application of research to the individual patient in front of us. This includes making the best possible clinical decisions. I think you’ve identified multiple constructs. The first is best available evidence. This alone is no easy task, as I hope I’ve illustrated in the above post. The second is application of evidence and knowledge to the individual. Again, a complicated endeavor, as it involves applying basic knowledge, physiology, and population-based outcomes studies to an individual person. Lastly, there is clinical decision making, which in and of itself is a skill and mental skill set dependent on the previously outlined topics, but is also its own entity. Clinical decision making requires a framework and the integration of knowledge and research on the specific topic of how we make decisions and how we can improve that ability.

    I’ve not, that I’m aware of, advocated for evidence overriding clinical decision making. In fact, I’d separate the two constructs if we are discussing treatment processes and interventions. The evidence for what we should do is separate from how we apply that evidence to an individual, which is separate from how we make clinical decisions. To be fair, of course these all interact and affect one another. But to make a concrete binary delineation that “one overrides the other” is, I agree, quite inaccurate. All of these topics are not choices of one vs. the other, but interacting concepts affecting care delivery and patient treatment.
