Are we teaching it backwards? Without understanding premises and argument validity in relation to research, an individual article analysis may be useless. A study may be flawed on premise alone, even with strong methodology and statistically significant results. A valid argument can still be false, and inappropriate conclusions will be drawn, likely leading to misguided justifications and explanations. Such errors can affect clinical practice, education, and future research.

Plenty of systems and checklists exist for systematically evaluating the quality of an individual study, including its design and methodology. The PEDro scale, the CONSORT Statement 25-item checklist, checklists for qualitative analysis, PRISMA for systematic reviews and meta-analyses, and other critical appraisal checklists all guide and contribute to appropriate analysis. But a critical review should not only critique the rigor of the inquiry and the accuracy of the conclusions; it should also, and maybe more importantly, assess the study’s plausibility in the context of the whole of current scientific understanding.

What does this tell us? What can it tell us given the design? What doesn’t it tell us? Too often scientific research, clinical and otherwise, is interpreted too broadly and thus inaccurately. It’s one of the major flaws of “pop” science journalism: overreaching conclusions, the sensational headlines touting miracle cures and “bad” foods that cause cancer. But even a narrow lens of assessment, focused only on the specific study, may lead to improper understanding. Outside of a purely methodological critique, an article analysis cannot, must not, be done in isolation. The current state of the literature on the topic specifically, in conjunction with basic science generally, must be taken into account.

Is this plausible?

Plausibility must take into account not only previous clinical research and outcomes studies (efficacy and effectiveness), but also basic science and current mechanistic research. Such an approach prevents reinforcing an unlikely or inaccurate explanatory model despite positive outcomes. Unfortunately, physical therapy is likely plagued by misinterpreted positive-outcome studies that explicitly or implicitly support theoretical constructs that may be invalid. One specific example is the variance in explanatory models of manual therapy’s effects.

What are my beliefs? Biases? Preferred treatment constructs and approaches?

An overlooked area of assessment is ourselves: the person doing the analyzing. It’s imperative that the critical lens of analysis be pointed back upon its user. Rarely will an orthopedic manual physical therapist postulate that manual therapy does not work. The very best may ponder whether the mechanisms are completely outside the current understanding. A physical therapist practicing in an ICU rarely questions the effectiveness of movement and mobility. But clinicians and researchers should strive to rigorously falsify via the scientific method in order to sharpen accuracy and understanding over time. Physical therapists are inherently, and understandably, focused on the specifics of treatment that appear most important. What exercise? What technique? What works? Yet the scientific rigor, and uncomfortable thought, of attempting to prove physical therapy does not work will lead to more specific knowledge of why it does work and of the potentially attainable outcomes. It seems contradictory, but falsifiability is a basic tenet of hypothesis testing in science. So, ask yourself: what would it take to change my mind? It’s time for some serious critical thinking.
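To make that falsification mindset concrete, here is a minimal sketch in Python (the “treatment” and all numbers are purely hypothetical): simulate trials in which the treatment truly does nothing, and count how often a standard significance test still comes back “positive.” The habit worth building is asking what the data would look like if the null hypothesis were true.

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def null_trial(n=30, reps=500):
    """One simulated trial where treatment and control are drawn from the
    SAME distribution: the null hypothesis is true by construction."""
    treatment = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    observed = abs(mean(treatment) - mean(control))
    # Permutation test: how often does shuffling the group labels produce
    # a difference at least this large? That proportion is the p-value.
    pooled = treatment + control
    extreme = 0
    for _ in range(reps):
        random.shuffle(pooled)
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            extreme += 1
    return extreme / reps

trials = 200
positives = sum(null_trial() < 0.05 for _ in range(trials))
print(f"'significant' results from a do-nothing treatment: {positives}/{trials}")
# Expect roughly 5%. A p-value alone cannot separate a real effect from
# chance, which is why design and plausibility still matter.
```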

Points to Ponder

  • Hypothesis & Null Hypothesis
  • Plausibility of Hypothesis based on previous research and overall knowledge
  • Methods Critique (utilize checklists)
  • Efficacy vs. Effectiveness Design
  • What is the comparison or control group?
  • Are these groups similar in non-specific variables such as frequency, duration, and one-on-one time?
  • Is the comparison or placebo believable to the patient?
  • What the results can tell us given the study design
  • What the results can NOT tell us given the study design
  • Plausibility of results from author’s interpretation
  • Plausibility of theoretical model presented or utilized
  • Plausibility of the discussion & conclusion in relation to the current understanding of the topic specifically
  • Plausibility based on basic science: physics and mechanics (including tissue mechanics), physiology, and psychology
  • How else could the results be explained? Placebo? Regression to the mean? Different mechanisms? (See the sketch after this list.)
  • Did the authors make the appropriate conclusion?
  • What’s YOUR conclusion and understanding?
  • Overall summary and critique
  • How and why to integrate?
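On that regression-to-the-mean question, a minimal, hypothetical Python simulation (the pain scores below are illustrative, not drawn from any study) shows why it deserves its own bullet: patients “enrolled” because they happened to measure high at baseline will, on average, score closer to their own typical level at follow-up with no treatment applied at all.

```python
import random

random.seed(0)

# Each simulated patient has a stable "true" pain level; any single
# measurement adds day-to-day noise around it. No treatment is applied.
N = 10_000
true_pain = [random.gauss(5, 1) for _ in range(N)]
baseline = [t + random.gauss(0, 2) for t in true_pain]
followup = [t + random.gauss(0, 2) for t in true_pain]

# "Enroll" only patients who happened to measure high at baseline,
# as a pain trial recruiting symptomatic patients would.
enrolled = [i for i in range(N) if baseline[i] >= 8]

mean_base = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_follow = sum(followup[i] for i in enrolled) / len(enrolled)

print(f"enrolled n = {len(enrolled)}")
print(f"baseline mean:  {mean_base:.2f}")   # well above the true average
print(f"follow-up mean: {mean_follow:.2f}") # drifts back toward it
# The apparent "improvement" occurs with zero intervention, because the
# extreme baseline scores were partly measurement noise.
```

A single-arm study of any treatment in this population would look like a success; only a comparison group exposes the artifact.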

What is the takeaway?

“That’s valid,” you say, but what do you mean by that? A single statement can be valid by itself if it is a previously proven “truth,” but what about an argument? You remember arguments, right? Premise, premise, therefore conclusion? Funny thing about valid arguments: they have nothing to do directly with truth. Arguments can be valid and false at the same time, just as they can be invalid and true at the same time. What?

Since deductive arguments are the basis of all research, you need to understand this concept. As I have quoted before on a podcast: “A flawed study is still a flawed study regardless of p-value or level of evidence.” – Erik Meira, When a valid argument can be false
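To unpack the distinction in that quote, here is a minimal sketch in Python (the manual therapy premise at the end is a hypothetical illustration, not a claim from any study). Validity is a property of an argument’s form, checkable by brute-force truth table; soundness additionally requires the premises to actually be true.

```python
from itertools import product

# Validity is about FORM: an argument is valid if there is no assignment
# of truth values where every premise is true and the conclusion is false.
def is_valid(premises, conclusion, n_vars):
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # counterexample: true premises, false conclusion
    return True

if_a_then_b = lambda a, b: (not a) or b

# "If A then B. A. Therefore B." (modus ponens) -- a valid form.
print(is_valid([if_a_then_b, lambda a, b: a], lambda a, b: b, 2))  # True

# "If A then B. B. Therefore A." (affirming the consequent) -- invalid.
print(is_valid([if_a_then_b, lambda a, b: b], lambda a, b: a, 2))  # False

# Soundness is about CONTENT: the premises must actually be true.
# Hypothetical: A = "this explanatory model of manual therapy is correct"
# (suppose it is false), B = "the technique worked as the model predicts".
A, B = False, False
print(if_a_then_b(A, B) and A)  # False: the form is valid, but a premise
                                # fails, so the conclusion is not established.
```

“Valid and false at the same time” is exactly the modus ponens case with a false premise: the machinery of the argument is flawless, and the conclusion is still unsupported.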

The complications continue. Concepts such as placebo, non-specific effects, nocebo, incentives, behavioral psychology, decision making, logical fallacies, cognitive biases, and epidemiology all play vital roles in not only which treatments we (should) utilize, but how they (may) work. In addition to clinical and scientific research, the understanding of the hows and whys of decisions in clinical practice rests upon these concepts. Given where trials of physical therapy interventions are published, it’s imperative to read outside the physical therapy specific literature. Research in psychology and behavior assists in a deeper understanding of the importance of the entire treatment encounter, in addition to how clinicians make decisions within a treatment encounter.

It is reflective and complex decision-making that integrates all sources of evidence that we should be having serious conversations about, and it’s that thoughtfulness that is required of a doctoring profession – not the myopic and obtuse yes or no to the question: “Are you evidence based?” – Jason Silvernail, DPT, DSc

So, stop volleying RCTs back and forth in an evidence ping-pong match, and begin integrating knowledge. The information from seemingly unrelated fields contains insights that can result in true evolution in our understanding of clinical practice. Surprisingly, even many of the randomized controlled trials of physical therapy interventions are not published in physical therapy specific journals. Ponder how the “evidence base” should be selected.

We rarely believe we are ignorant, but could we be wrong?