"If you're a mathematician and you want to really have an impact on science, you need a catchy name," University of Michigan statistician Susan A. Murphy declared to the audience gathered in the MAA Carriage House on May 29.
Which is why Murphy is grateful to her behavioral scientist collaborator, L. Collins, for devising the acronym SMART to capture the salient characteristics of the sequential, multiple assignment, randomized trials she has been instrumental in pioneering.
In "Getting SMART about Adapting Interventions!" Murphy told her Carriage House audience what SMART designs are, what they're currently used for, and how she is working to generalize them for delivering just-in-time adaptive interventions to recovering drug addicts.
Overcoming an addiction is a process, much like regaining mobility after a car accident or managing attention deficit hyperactivity disorder (ADHD). The process requires more than a single visit to a clinician, a single course of medication, or a one-time suggestion of behavior modification.
Standard clinical trials compare the efficacies of two medications or the efficacy of one treatment to a control. They do not, Murphy pointed out, help physicians decide how to use the medication or behavioral therapy in question. They do not inform the tactics of clinical practice.
But adaptive interventions do. Murphy defined adaptive interventions as "individually tailored sequences of interventions, with treatment type and dosage changing according to patient outcomes."
Typically an adaptive intervention is based on a combination of behavioral and biological theory, clinical experience, and expert opinion, but Murphy's SMART designs unleash the power of data to address questions about the sequencing of treatments, the timing of treatment alterations, and individualizing treatment.
Suppose, for instance, that a child has been diagnosed with ADHD. Experts disagree about whether behavioral modification or medication—Ritalin, say—more safely and effectively manages the condition, so which treatment should be tried first? If a child has already received medication, should the prescription be extended, or should behavioral modification be added to the treatment plan? How should nonresponse to a treatment be factored into treatment decisions?
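At its core, an adaptive intervention like the one these questions describe is just a decision rule mapping a patient's treatment history and response to a next treatment. A toy sketch of such a rule (the treatment names and branching logic here are purely illustrative, not taken from the study Murphy described):

```python
def next_treatment(stage, responded_so_far, prior_treatment):
    """Toy ADHD adaptive intervention (illustrative logic only)."""
    if stage == 1:
        # One possible opening move: try behavior modification first.
        return "behavior modification"
    if responded_so_far:
        # Responders stay the course.
        return prior_treatment
    if prior_treatment == "behavior modification":
        # Nonresponders to behavioral therapy get medication added.
        return "behavior modification + medication"
    # Nonresponders to medication get an intensified dose.
    return "increased medication dose"
```

A SMART design randomizes patients at each such decision point, so that data, rather than expert opinion alone, can say which branches the rule should take.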
Murphy described how she and her collaborators used Q-learning to derive a proposal for an optimal adaptive intervention. Q-learning, she explained, is an extension of a statistician's favorite tool—regression—to multiple decisions.
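For a two-stage trial, that extension works backward: fit a regression at the final stage, use it to compute each patient's predicted outcome under the best final-stage treatment, then regress that "pseudo-outcome" at the first stage. A minimal sketch on simulated data (the covariates, treatment effects, and model forms are invented for illustration, not drawn from the ADHD study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 138  # matches the ADHD study's sample size

# Simulated SMART data: baseline severity x, stage-1 and stage-2
# treatments a1, a2 in {0, 1}, and end-of-study outcome y.
x = rng.normal(size=n)
a1 = rng.integers(0, 2, size=n).astype(float)
a2 = rng.integers(0, 2, size=n).astype(float)
y = x + a1 * (0.5 - 0.6 * x) + a2 * (0.3 + 0.4 * x) + rng.normal(size=n)

def ols(X, t):
    """Least-squares fit -- the statistician's favorite tool."""
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coef

# Stage 2: regress the outcome on history, treatment, and interaction.
X2 = np.column_stack([np.ones(n), x, a2, a2 * x])
beta2 = ols(X2, y)

# Optimal stage-2 action per patient: treat iff the estimated effect is positive.
a2_opt = (beta2[2] + beta2[3] * x > 0).astype(float)

# Pseudo-outcome: the predicted outcome under the optimal stage-2 rule.
y_tilde = np.column_stack([np.ones(n), x, a2_opt, a2_opt * x]) @ beta2

# Stage 1: the same regression, now against the pseudo-outcome.
X1 = np.column_stack([np.ones(n), x, a1, a1 * x])
beta1 = ols(X1, y_tilde)

# The learned adaptive intervention: one treatment rule per stage.
def stage1_rule(xi):
    return int(beta1[2] + beta1[3] * xi > 0)

def stage2_rule(xi):
    return int(beta2[2] + beta2[3] * xi > 0)
```

The backward pass is what makes this more than two separate regressions: the stage-1 fit accounts for the fact that a smart clinician will also act optimally at stage 2.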
The treatment policy that resulted from the multistep statistical process Murphy outlined offered a decisive recommendation, but was the study's sample size—138—large enough to allow researchers to stand staunchly behind their proposed treatment policy? To quantify their certainty, Murphy and her colleagues calculated confidence intervals.
"To get these confidence intervals involved exciting statistics," Murphy said, relishing the memory.
Trials like the ADHD study Murphy discussed have brought much-needed rigor and clarity to clinical practice, and funding agencies have noticed.
Murphy reported that, because of the desire to find good treatments for chronic disorders, the National Institutes of Health has this year put out four requests for proposals that call for SMART or adaptive intervention designs.
Murphy herself spends "all" her time these days puzzling over how to use data to inform delivery of just-in-time adaptive interventions by wearable devices. She aims to improve recovery chances among former drug abusers.
Once you've given a recovering addict a smartphone equipped with various behavioral recovery supports, such as meditation suggestions, favorite music, and preprogrammed text messages urging the user to reach out to friends, how do you develop a behavioral intervention that delivers personalized treatment right when it's needed, as many times a day as help is sought?
"That's what we're doing right now," Murphy said. "We'll see how it goes in a couple of years." —Katharine Merow