Efficiency is an important concept in treatment outcome research, capturing both the effect of a treatment and the work or time required to achieve that effect. Here is a simple new statistic that may increase the power to detect differences in treatment efficiency in a treatment outcome study.
Most treatment studies now report effect sizes, which provide a standardized way of conveying the impact of the treatment, that is, the magnitude of change on the specific outcome measure being evaluated. In treatment comparison studies in which the design specifies that treatment ends when symptom reduction criteria are met, the number of sessions to termination is normally reported as well. Although effect size and number of sessions to termination are each appropriate outcomes to examine, when treatment efficiency is split across these two separate components, the effect is diluted and may fall short of statistical significance. Thus, even when both trends favor one treatment, neither can be interpreted with confidence.
This is of particular concern in small-N pilot studies, in which statistical power is often insufficient to detect even clinically significant effects. We faced this problem in a recent study (Jaberghaderi, Greenwald, Rubin, Zand, & Dolatabadi, 2004). With only seven participants in each condition, the differences in effect sizes were large but not statistically significant. So we combined these two concepts, amount of change and number of sessions, into a single “miles per gallon” type of efficiency statistic. To calculate the change per session for each treatment condition, we simply divided the mean raw-score change on a given outcome measure (in that treatment condition) by the mean number of sessions for the condition. Combining the two into a single efficiency statistic enhanced the power to detect actual differences between treatment conditions. Furthermore, because magnitude of change and number of sessions both represent aspects of efficiency, such a combined statistic can more effectively convey a potentially important aspect of a study’s findings.
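The calculation described above can be sketched in a few lines of code. The scores and session counts below are hypothetical, invented for illustration; they are not data from Jaberghaderi et al. (2004), and the function name is ours.

```python
def change_per_session(pre_scores, post_scores, sessions):
    """Mean raw-score change divided by mean number of sessions
    for one treatment condition (hypothetical helper)."""
    n = len(pre_scores)
    # Mean improvement: pre-treatment score minus post-treatment score.
    mean_change = sum(pre - post for pre, post in zip(pre_scores, post_scores)) / n
    # Mean number of sessions to termination in this condition.
    mean_sessions = sum(sessions) / n
    # "Miles per gallon": change achieved per session of treatment.
    return mean_change / mean_sessions

# Hypothetical condition of seven participants (symptom scores
# decrease from pre to post; sessions vary by participant).
pre = [40, 35, 42, 38, 41, 37, 39]
post = [22, 20, 25, 18, 24, 21, 23]
sessions = [8, 10, 7, 9, 8, 11, 9]

print(round(change_per_session(pre, post, sessions), 2))  # prints 1.92
```

Computing this value separately for each treatment condition yields directly comparable efficiency figures, even when the conditions differ in both amount of change and treatment length.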
The statistic is most appropriate in studies such as Jaberghaderi et al. (2004), in which treatment ends when termination criteria are met. In studies stipulating a fixed number of sessions, there may be no way to determine how many sessions it actually took to effect the change, so a change-per-session analysis would not be meaningful. Because efficiency may prove to be an advantage of one treatment over another, this type of analysis should be considered in other treatment comparison studies.
This brief report is sponsored by ISTSS’s Special Interest Group on Research Methodology. The Special Interest Group seeks to foster communication among investigators and clinicians and to provide opportunities to learn state-of-the-art analytic techniques relevant to trauma research. If you are interested in becoming a member of the Interest Group, please contact chairs Dean Lauterbach at firstname.lastname@example.org or Dorie Glover at Dglover@mednet.ucla.edu.
Jaberghaderi, N., Greenwald, R., Rubin, A., Zand, S. O., & Dolatabadi, S. (2004). A comparison of CBT and EMDR for sexually abused Iranian girls. Clinical Psychology and Psychotherapy, 11, 358–368.
Rogers, S., & Silver, S. M. (2003, September). CBT v. EMDR: A comparison of effect size and treatment time. Poster session presented at the annual meeting of the EMDR International Association, Denver.
Ricky Greenwald, PsyD, is executive director of the Child Trauma Institute, Greenfield, Massachusetts.