Nice try (??), but the "he" I was referring to was the reporter who went to live at a treatment facility (the second link posted by Whooter).
But back to the study... I don't know who Ellen Behrens is, but this evaluation was clearly conducted and written by a professional evaluator. (I recognize this from my own graduate studies in program evaluation and analysis.)
A comparison group is not necessary for this kind of impact evaluation. The research design and methods described are still solid - it is a non-experimental evaluation with a pre-test and a post-test. I agree that an additional, later post-discharge analysis would be ideal, and so did the authors. Here is what they said:
"Future research in private residential treatment needs to address the question of post-discharge maintenance of treatment gains. The residential treatment literature indicates that a significant portion of adolescents who function well at discharge subsequently experience a decline when transferred to a lower level-of-care (Curry, 1991; Epstein, 2004; Hair, 2005). The second phase of this study will explore that issue using the private residential data of the present study as the point of comparison.
Private residential treatment research would also benefit from process-focused studies that
attempt to attribute change to specific components of treatment. Private residential care is so
multi-facetted and complex that it is less an intervention and more a “tapestry” of interventions
(Fahlberg, 1990). As such, attempts to tie program components to outcomes would have
profound clinical implications.
Whether in process or outcome studies, future research in private residential treatment should pay
attention to the role of three factors: the “trajectory of change”, family involvement, and
aftercare. "
I am guessing that later post-discharge surveys were not done because they are difficult to obtain. By the way, regarding the pre-tests and post-tests here: credible researchers would administer the tests themselves, and if that is not possible, would ensure that the integrity of the measures was maintained. That is, nobody would be holding a stick (literally or figuratively) over the participants' heads. And for what it's worth, in this study both the kids and their parents reported improvement.
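For anyone curious what that single-group pre-test/post-test design boils down to statistically, here is a minimal sketch of the kind of paired intake-vs-discharge comparison an evaluator might run. The numbers and the "symptom scale" are hypothetical placeholders of mine, not data from this study:

```python
# Minimal sketch of a single-group pre-test/post-test comparison.
# All scores below are hypothetical placeholders, NOT data from the study.
from scipy import stats

# One entry per participant: problem-scale score at intake (pre) and at
# discharge (post) on a made-up symptom scale where lower = better.
pre  = [62, 71, 58, 66, 74, 69, 60, 65]
post = [48, 55, 50, 52, 60, 47, 51, 49]

# Paired t-test: did the same participants' scores change from pre to post?
result = stats.ttest_rel(pre, post)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Average change, the simple effect estimate this design supports
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean change = {mean_change:+.1f} points")
```

The point is just that this design measures change within the treated group itself, which is why a comparison group, while nice to have, is not required to document pre-to-post improvement.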
Are you sure this study was not peer-reviewed? In any case, it was certainly vetted if it was presented for 50 minutes at the APA convention.
I just think the study is interesting and would like to see more of this kind of thing.