Measuring outcomes, treatment efficacy, and treatment effectiveness are separate yet interacting constructs. And, it’s more than semantics. Clinically, measuring outcomes masquerades as simple, while interpreting those outcomes appropriately can be quite complex. Outcome bias, or results-oriented analysis, presents a significant challenge to the practicing clinician. Outcome measures measure outcomes, not the effects of intervention:

Perhaps it is unfortunate that the physiotherapy profession has responded to the perception that physiotherapists must justify what they do by routinely measuring clinical outcomes. The implication is that measures of outcome can provide justification for intervention. Arguably that is not the case. Outcome measures measure outcomes. They do not measure the effects of intervention. Outcomes of interventions and effects of interventions are very different things. Clinical outcomes are influenced by many factors other than intervention, including the natural course of the condition, statistical regression, placebo effects, and so on. (Tuttle (2005) makes this point clearly in his article, in this issue, on the predictive value of clinical outcome measures.)

The implication is that a good outcome does not necessarily indicate that intervention was effective; the good outcome may have occurred even without intervention. And a poor outcome does not necessarily indicate that intervention was ineffective; the outcome may have been worse still without intervention. This is why proponents of evidence-based physiotherapy, including ourselves (Herbert et al 2005), argue it is necessary to look to randomised trials to determine, with any degree of certainty, the effects of intervention. It is illogical, on the one hand, to look to randomized controlled trials for evidence of effects of interventions while, on the other hand, seeking justification for the effectiveness of clinical practice with uncontrolled measurement of clinical outcomes.
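
To make the distinction concrete, below is a minimal simulation sketch in Python (all numbers are hypothetical and chosen purely for illustration). It contrasts the uncontrolled pre-post change a clinic would observe, which bundles natural recovery and measurement noise together with any true effect, against the between-group difference a randomized comparison would estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical patients per group

# Hypothetical pain scores (0-10) at baseline; patients seek care near a symptom peak.
baseline = rng.normal(7.0, 1.0, n * 2).clip(0, 10)

# Follow-up = baseline + natural recovery (including drift back toward typical
# symptom levels) + noise, plus a true treatment effect only in the treated group.
natural_recovery = -1.5          # improvement that occurs with or without treatment
true_treatment_effect = -0.5     # additional improvement attributable to treatment
noise = rng.normal(0, 1.0, n * 2)

treated = np.zeros(n * 2, dtype=bool)
treated[rng.permutation(n * 2)[:n]] = True   # randomization

follow_up = baseline + natural_recovery + noise + true_treatment_effect * treated

# "Outcome" as seen in clinic: uncontrolled pre-post change in treated patients.
pre_post_change = (follow_up - baseline)[treated].mean()

# "Effect" as estimated by a randomized comparison: treated vs untreated at follow-up.
effect_estimate = follow_up[treated].mean() - follow_up[~treated].mean()

print(f"Uncontrolled pre-post change (the 'outcome'): {pre_post_change:+.2f}")
print(f"Randomized between-group difference (the 'effect'): {effect_estimate:+.2f}")
# The pre-post change (~ -2.0) is roughly four times the true effect (-0.5): most of
# the observed improvement would have happened without the intervention.
```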

Principles of Outcomes Measurement

1. Objective and Measurable
2. Decrease Bias and Improve Accuracy
3. Reliable and Reproducible
4. Valid: Are we measuring what we think?
5. Sensitive to Change: Does the measure detect changes in the construct? (see the sketch after this list)
6. Patient Report vs. Patient Performance
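
As a rough illustration of principles 3 and 5, the sketch below (Python, entirely hypothetical data) computes a simple test-retest correlation as a stand-in for reliability and a standardized response mean as one common responsiveness index. Real psychometric evaluation would use intraclass correlation coefficients, minimal detectable change, and minimal clinically important difference, but the logic is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # hypothetical patients

# Hypothetical scores on a 0-100 self-report scale, measured twice before
# treatment (test-retest) and once after a course of care.
true_status = rng.normal(55, 10, n)
test = true_status + rng.normal(0, 5, n)       # measurement error at time 1
retest = true_status + rng.normal(0, 5, n)     # measurement error at time 2
post = true_status + 12 + rng.normal(0, 5, n)  # hypothetical 12-point improvement

# Test-retest reliability (Pearson r used here as a simple proxy for an ICC).
reliability = np.corrcoef(test, retest)[0, 1]

# Responsiveness: standardized response mean = mean change / SD of change.
change = post - test
srm = change.mean() / change.std(ddof=1)

print(f"Test-retest r: {reliability:.2f}")       # how reproducible the score is
print(f"Standardized response mean: {srm:.2f}")  # how well it detects change
```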

In addition, measuring outcomes requires understanding the various constructs and categories that are measurable. These include, but are not limited to:

Patient Report


Patient Performance

  • Functional Test (5 x Sit to Stand, 6 Minute Walk Test)
  • Functional Task/Activity (squat, stairs)
  • Exercise or Activity Testing


International Classification of Functioning, Disability and Health (ICF) Framework

  • Impairments of Body Structure and/or Function
  • Activity Limitations
  • Participation Restrictions


Body Systems Level

  • Cognitive
  • Neuromuscular
  • Musculoskeletal
  • Cardiopulmonary
  • Integumentary
  • Psycho-social


Health Services

  • Duration of Care
  • Frequency of Care
  • Number of Visits
  • Future Care Needs
  • Cost
  • Cost Savings
  • Morbidity



These are only a few select constructs and measurements. Another, arguably more complex, area of assessment is the narrative and experiential outcome as described by the patient: the illness narrative, interpretations, and journey through potential suffering.

The difference, and at times disconnect, between progression of physical function as measured by patient performance versus patient report has been characterized in total hip arthroplasty. “The influence of pain on self-reported physical functioning serves as an explanation for the poor relationship between self-reported and performance-based physical functioning. When using a self-report measure such as the WOMAC, one should realize that it does not seem to assess the separate constructs—physical functioning and pain—that are claimed to be measured.” Both patient report and performance are important. Each can guide further intervention or provide insight into current deficits.
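
One way to picture how this disconnect arises is a toy model (Python, hypothetical values) in which the self-reported “function” score is assumed to blend the functional construct with pain, while the performance test reflects function alone; that blending by itself is enough to weaken the correlation between the two.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 150  # hypothetical post-arthroplasty patients

# Hypothetical latent variables (standardized): true physical function and pain.
function = rng.normal(0, 1, n)
pain = rng.normal(0, 1, n)

# Performance-based test reflects function plus measurement noise.
performance = function + rng.normal(0, 0.5, n)

# Self-reported "function" (WOMAC-like) is assumed here to blend function and pain.
self_report = 0.5 * function - 0.5 * pain + rng.normal(0, 0.5, n)

r = np.corrcoef(performance, self_report)[0, 1]
print(f"Correlation between performance and self-report: {r:.2f}")
# The more the report is driven by pain, the weaker its relationship
# to performance-based function.
```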

For example, a patient with improvement in performance, but no change in report, may be struggling to recognize or understand improvements in certain domains (symptoms, performance, function). Or, perhaps education has not addressed the patient’s main concern or perception. Mistaking measured clinical outcomes for the actual effect of treatment may result in improper (or even pseudo-random) intervention selection and/or patient care approaches. I postulate that this mistake is the prime reason physical therapy as a profession is quick to integrate new, “innovative” treatment “tools” that lack true prior plausibility. Or, to continue utilizing interventions in the face of evidence suggesting a lack of treatment effect. Mistaking observed and measured clinical outcomes for treatment effectiveness likely results from the post hoc ergo propter hoc logical fallacy.

When we mistake outcomes for effectiveness, we risk assuming causation and, subsequently, treatment mechanism. Care must be taken to avoid leaps in logic regarding effectiveness and mechanism of action. A review of the evolution of our understanding of manual therapy mechanisms illustrates how continued observation of positive clinical outcomes likely reinforced inaccurate interpretations based upon hypothetical anatomy and biomechanics, devoid of true physiology and actual tissue mechanics. We now know much more.

Although, to be fair, constructing care processes, intervention approaches, and treatment paradigms absent a (potential) theoretical mechanism of action is quite challenging. Further, human brains seek explanations for observed clinical events, even within research. So, when treatment X is routinely associated with observed patient report or outcome Y, our brains automatically begin assigning reason Z as the “why.”

Measure everything!

No. Quite the contrary. Clinicians should aim to select measures that are relevant to the patient’s main complaint, goals, condition, and/or diagnosis (if one exists). In addition, the measures chosen should be sufficiently responsive to change, encompass multiple constructs, and cross domains. While important, relying solely on patient report is an incomplete, flawed approach to measuring outcomes and assessing treatment in the clinical setting.

Two differing scenarios may occur when utilizing outcomes observed or measured in clinic as the primary reasoning for decision making regarding interventions/treatment:

A. Effective interventions may be abandoned when outcome(s) are not improving on the assumption of lack of effect.
B. Ineffective interventions or approaches may be continued when outcomes are improving on the assumption of effect.

In scenario A, the patient may in fact worsen without the treatment. Perhaps progress would simply be slower without it, or natural history suggests a worse trajectory. Either way, an effective intervention or process may be ceased prematurely. In scenario B, improvement is measured, but placebo, non-specific effects, incentives, and/or bias in measurement and patient reporting contribute to the observation of a positive outcome in the clinical environment. “It works!” Or, appears to. But, a multitude of other factors affect the presence of a measured outcome (positive or negative).
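
One of those factors, regression to the mean (discussed further below), can generate apparent improvement entirely on its own. Here is a minimal simulation sketch (Python, hypothetical symptom scores) in which no one receives any treatment at all:

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_weeks = 1000, 12  # hypothetical population followed weekly

# Hypothetical fluctuating symptom scores (0-10): a stable personal average
# plus week-to-week variation, with NO treatment applied at any point.
personal_mean = rng.normal(4.0, 1.0, n_people)
scores = personal_mean[:, None] + rng.normal(0, 1.5, (n_people, n_weeks))
scores = scores.clip(0, 10)

# People tend to seek care in a bad week: "enroll" anyone scoring >= 7 in week 0.
enrolled = scores[:, 0] >= 7

baseline = scores[enrolled, 0].mean()
follow_up = scores[enrolled, 6].mean()  # six weeks later, still untreated

print(f"Mean score at enrollment: {baseline:.1f}")
print(f"Mean score 6 weeks later (no treatment): {follow_up:.1f}")
# The 'improvement' here is pure regression to the mean: extreme values at
# selection drift back toward each person's usual level.
```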

The multi-factorial nature of treatment mechanisms complicates the ability to clinically observe effectiveness. The myriad reasons why individuals may report and/or exhibit improvements in symptoms, function, and other constructs make “outcomes” a dynamic and complicated subject. Perhaps the condition has a favorable natural history, or regression to the mean is present. And, perhaps the patient would have progressed more quickly with a more effective treatment approach. It’s complicated. Don’t take all the credit, and don’t take all the blame. So, what should we do?

Measure nothing, clinical outcomes are meaningless!

No. Quite the contrary. In addition to selecting appropriate outcome measurements, clinicians must integrate and understand appropriate current clinical, mechanistic, and basic science research. As science-based practitioners, physical therapists are charged to select effective, plausible, safe, and efficient approaches to care that are focused on the individual patient. This is not an argument for the utilization of only specific outcome measurements and interventions with strong randomized controlled trial level evidence. Plausibility matters. The individual person matters. It’s complicated. And, it’s easy to fool ourselves. Richard Feynman suggests:

The first principle is that you must not fool yourself — and you are the easiest person to fool.

So, measure clinical outcomes. They are important. But, ensure measurements cross constructs and domains. Don’t rely solely on patient reports. And, don’t claim effectiveness based on observation alone. We must acknowledge the complexity. No one is saying clinical outcomes measurement is not important, or is not illustrative of important concepts. Clinical data and outcomes are vital to self-reflection, integration of evidence, health services, and overall care processes. But, the plural of anecdote is not data, and outcome measures cannot demonstrate effectiveness. That’s not an argument to not measure outcomes. It’s an argument to improve measurement and, more importantly, understanding.

Resources

1. Evidence Based Physiotherapy: A Crisis In Movement
2. Causation and Evidence Based Practice: An Ontological Review
3. Causal Knowledge in Evidence Based Practice
4. Mechanisms: What are they evidence for in evidence based medicine?
5. Placebo use in pain management: The role of medical context, treatment efficacy, and deception in determining placebo acceptability
6. Placebo Response to Manual Therapy: Something out of nothing?
7. The Mechanisms of Manual Therapy
8. The influence of expectation on spinal manipulation induced hypoalgesia
9. Evidence for a direct relationship between cognitive and physical change during an education intervention in people with chronic low back pain
10. The contributing factors of change in therapeutic process
11. RehabMeasures.org