Every year Edge poses a question and subsequently curates responses from a variety of intellectuals. The result is a fascinating compilation of short essays on a range of ideas orbiting around a central theme. Previous questions include “what scientific concept would improve everybody’s toolkit?” and “what is your favorite deep, elegant, or beautiful explanation?”
And in 2014, “what scientific idea is ready for retirement?”
Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858-1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals. Why wait that long?
WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?
Ideas change, and the times we live in change. Perhaps the biggest change today is the rate of change. What established scientific idea is ready to be moved aside so that science can advance?
by Laurie R. Santos and Tamar Gendler
The lesson of much contemporary research in judgment and decision-making is that knowledge—at least in the form of our consciously accessible representation of a situation—is rarely the central factor controlling our behavior. The real power of online behavioral control comes not from knowledge, but from things like situation selection, habit formation, and emotion regulation. This is a lesson that therapy has taken to heart, but one that “pure science” continues to neglect.
So, knowing that knowing is not nearly sufficient, what are we to do? Metacognition, reflective experience, and recognition of incentives are all components of improving our decision-making performance. Studying cognitive bias, behavior change, and philosophical argument also aids critical thinking. Beyond our personal and professional growth, these concepts are salient within clinical practice, patient care, research, and education.
by Robert Sapolsky
The problem with “a” gene-environment interaction is that there is no gene that does something. It only has a particular effect in a particular environment, and to say that a gene has a consistent effect in every environment is really only to say that it has a consistent effect in all the environments in which it has been studied to date. This has become ever more clear in studies of the genetics of behavior, as there has been increasing appreciation of environmental regulation of epigenetics, transcription factors, splicing factors, and so on. And this is most dramatically pertinent to humans, given the extraordinary range of environments—both natural and culturally constructed—in which we live.
Sapolsky’s essay illustrates complexity and interaction. While simplicity is important, caution is warranted to avoid overgeneralization. And, of course, do not discount the effect of dependency. Everything depends: on incentives, environment, previous experience, the current situation, and more. Some factors and influences likely have not even been identified. This is especially true in the realm of humans and behavior. The interesting book The Dependent Gene: The Fallacy of Nature vs. Nurture is an in-depth exploration and refutation of inaccurate understandings of genetics.
by Gary Marcus
But science still revolves, most fundamentally, around a search of the laws that describe our universe. And the one thing that Big Data isn’t particularly good at is, well, identifying laws. Big Data is brilliant at detecting correlation; the more robust your data set, the better chance you have of identifying correlations, even complex ones involving multiple variables. But correlation never was causation, and never will be. All the big data in the world by itself won’t tell you whether smoking causes lung cancer. To really understand the relation between smoking and cancer, you need to run experiments, and develop mechanistic understandings of things like carcinogens, oncogenes, and DNA replication. Merely tabulating a massive database of every smoker and nonsmoker in every city in the world, with every detail about when they smoked, where they smoked, how long they lived, and how they died would not, no matter how many terabytes it occupied, be enough to induce all the complex underlying biological machinery.
Physical therapy likely underutilizes big data and available data sets. Nonetheless, big data inherently contains all the methodological issues of any study or data set: sampling bias, reliability, validity, blinding, confounding factors, operational definitions, and control. Remember: garbage in, garbage out. Big data must be put into context and properly analyzed. What question is being posed? What questions can the data actually answer? And what are the limitations? Quantity of data is never a direct substitute for quality. As Marcus asserts, “we should stop pretending big data is magic.” Data, after all, is just data, and is nothing without analysis. Analysis, if conducted improperly or with bad assumptions, breeds false interpretation.
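Marcus’s point that correlation is not causation can be made concrete with a toy simulation. The scenario below is my own illustration, not from his essay: a hidden common cause (here labeled “smoking”) drives two downstream variables (“yellowed fingers” and “cancer risk”) that never influence each other, yet the two are strongly correlated. No amount of tabulating fingers and cancer would reveal that scrubbing fingers changes nothing.

```python
# Toy illustration: a confounder produces strong correlation without causation.
# Variable names ("smoking", "fingers", "cancer") are hypothetical labels.
import random

random.seed(0)

n = 10_000
smoking = [random.random() for _ in range(n)]          # hidden common cause
fingers = [s + random.gauss(0, 0.1) for s in smoking]  # caused by smoking
cancer = [s + random.gauss(0, 0.1) for s in smoking]   # also caused by smoking

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(fingers, cancer)
print(f"correlation(fingers, cancer) = {r:.2f}")  # strongly positive
```

The correlation is large even though neither variable causes the other; only an experiment (intervening on fingers while holding smoking fixed) or a mechanistic model could distinguish the two explanations.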
by Joel Gold and Ian Gold
That a theory of mental illness should make reference to the world outside the brain is no more surprising than that the theory of cancer has to make reference to cigarette smoke. And yet what is commonplace in cancer research is radical in psychiatry. The time has come to expand the biological model of psychiatric disorder to include the context in which the brain functions. In understanding, preventing and treating mental illness, we will rightly continue to look into the neurons and DNA of the afflicted and unafflicted. To ignore the world around them would be not only bad medicine but bad science.
The direct parallels to physical therapy are striking. Psychological, contextual, and social factors are now recognized as foundational contributors to symptoms, function, intervention mechanisms, and treatment response. How will education, clinical practice, and research evolve to accommodate such insights?
by Kate Mills
However, limiting the potential contributions of informally trained individuals to the roles of data-collector or data-processor discounts the abilities of citizen scientists to inform study design, as well as data analysis and interpretation. Soliciting the opinions of individuals who are participants in scientific studies (e.g., children, patients) can help traditional scientists design ecologically valid and engaging studies. Equally, these populations might have their own scientific questions, or provide new and diverse perspectives to the interpretation of results.
There is a growing need for individuals to occupy the historically ill-defined black box between research science/academia and clinical practice. Further, healthcare yearns for clinicians who approach patient care and clinical data collection through a scientific lens. Scientific and clinical research is challenging, time-consuming, and expensive. Well-trained, highly skilled researchers and scientists are a requirement; I won’t argue otherwise. But interpreting and translating such knowledge into clinical practice is an often underappreciated skill set, and formal academic or clinical roles devoted to this necessary bridge are sparse. Conversely, formal means of translating clinical practice, clinical data, or even clinical observations into research (or to academia in general) are lacking. Well-designed quality improvement projects are one means of fusing clinicians and scientists, clinical care and research. Yet many clinicians lack the time and expertise to design projects, ensure proper data collection, analyze the data, and write manuscripts. Despite this, clinicians (and academics) must ask questions, collect data, and collaborate. We all have a role to play in knowledge creation and translation.
by Fiery Cushman
There is, of course, an alternative and very plausible source of big effects: Many small explanations interacting. As it happens, this alternative is worse than the wrong tree—it’s a near-hopeless tree. The wrong tree would simply yield a disappointingly small explanation. But the hopeless tree has so many explanations tangled in knotted branches that extraordinary effort is required to obtain any fruit at all…
…the world has large problems that demand ambitious scientific solutions. Yet science can advance only at the rate of its best explanations. Often, the most elegant ones are clothed around effects of modest proportions.
Physical therapy, as a profession, appears to fall victim to the search for big explanations. The factor, the intervention, the mechanism, the tool, the explanation. But pause and appreciate the complexity of the therapeutic process: it’s complicated, it’s dynamic, and it takes time. Therapy interventions must inherently be the result of a multitude of smaller factors interacting, for better and for worse, to produce an effect.
Immediately, I’m reminded of the effective philosophical tool of Occam’s Razor. Now, many misquote Occam’s assertion as “the simplest explanation should always be favored.” But there is an often-missed subtlety to the Razor originally sharpened by William of Occam. The premise is that the hypothesis with the fewest assumptions should gain favor, not necessarily the simplest one. Often, the more complicated the hypothesis, the more leaps of logic and, thus, the greater the number of assumptions. But remarkably simple explanations may rely on astounding, even grossly inaccurate, assumptions. Or worse, they may be founded upon a false premise upon which all subsequent assumptions are, to some extent, wrong.
by Jonathan Haidt
Occam’s razor is a great tool when used as originally designed. Unfortunately, many scientists have turned this simple tool into a fetish object. They pursue simple explanations of complex phenomena as though parsimony were an end in itself, rather than a tool to be used in the pursuit of truth.
Don’t make unwarranted, unfounded assumptions. But be wary, too, of oversimplification. Simple is nice, but accuracy is better. It’s complicated, after all.
by Emanuel Derman
But nowadays the world, and especially the world of the social sciences, is increasingly in love with statistics and data science as a source of knowledge and truth itself. Some people have even claimed that computer-aided statistical analysis of patterns will replace our traditional methods of discovering the truth, not only in the social sciences and medicine, but in the natural sciences too.
I believe we must be careful not to get too enamored of statistics and data science and thereby abandon the classical methods of discovering the great truths about nature (and man is nature too).
Statistics, despite being a field of study, is merely a tool, and thus prone to being misused, abused, and misunderstood.
by Charles Seife
It was designed to help researchers distinguish a real effect from a statistical fluke, but it has become a quantitative justification for dressing nonsense up in the mantle of respectability. And it’s the single biggest reason that most of the scientific and medical literature isn’t worth the paper it’s written on.
When used correctly, the concept of statistical significance is a measure to rule out the vagaries of chance, nothing more, nothing less…
Nevertheless, even though statisticians warn against the practice, it’s all too common for a one-size-fits-all finding of statistical significance to be taken as a shortcut to determine if an observation is credible—whether a finding is “publishable.” As a consequence, the peer-reviewed literature is littered with statistically significant findings that are irreproducible and implausible, absurd observations with effect sizes orders of magnitude beyond what is even marginally believable.
Beyond the misuse of significance mentioned, the concept of clinical significance is also important. Statistical significance alone cannot support clinical significance, or a clinically meaningful effect. And, conversely, apparently clinically meaningful effects are not sufficient for statistical significance. This is partly why research attempts to establish the minimal clinically important change (MCIC) or difference (MCID) for certain measures, though even this concept is likely elastic. But at the foundation of the entire issue is the misunderstanding and misapplication of statistical significance.
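The “vagaries of chance” Seife describes are easy to demonstrate. The simulation below is an assumed illustration, not drawn from his essay: it tests 1,000 comparisons where the null hypothesis is true by construction (both groups drawn from the same distribution) and counts how many clear the conventional p < 0.05 bar anyway.

```python
# Toy simulation: under a true null, about 5% of tests are "significant"
# purely by chance. The z-test helper is an illustrative implementation.
import math
import random

random.seed(1)

def z_test_p(a, b, sigma=1.0):
    """Two-sided z-test p-value for equal means, known standard deviation."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    # Two-sided p-value from the standard normal CDF: p = 1 - erf(|z|/sqrt(2))
    return 1 - math.erf(abs(z) / math.sqrt(2))

trials = 1000
false_positives = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]  # same distribution: null is true
    if z_test_p(a, b) < 0.05:
        false_positives += 1

print(f"{false_positives} of {trials} null comparisons reached p < 0.05")
```

Roughly 5% of the comparisons come out “significant,” every one of them a pure statistical fluke; a literature built by publishing only those would be exactly the litter of irreproducible findings Seife describes.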
by Ian McEwan
Every last serious and systematic speculation about the world deserves to be preserved. We need to remember how we got to where we are, and we’d like the future not to retire us. Science should look to literature and maintain a vibrant living history as a monument to ingenuity and persistence.
I’d contend that more than a few ideas within physical therapy are ready, in fact likely overdue, for retirement from discussion and clinical care. What are they? And how do we guide them into the history books and out of our textbooks? Can we pay our respects to these ideas by identifying the potential mistakes within them? What lessons could be learned without perpetuating artificial relevance and unwarranted influence?