Author Topic: Lifelong Progressive Lying


Posted by: Anonymous
Lifelong Progressive Lying
« on: June 09, 2009, 05:01:23 PM »
"Michael Quinn Patton, former President of the American Evaluation Association, was the principal author of an outcome evaluation of AARC in 2003. His findings showed that AARC clients were suffering severe losses prior to their admission to the program:"
http://www.aarc.ab.ca/qa.php#q2
"PATTON: They developed the instrument. I reviewed the instrument with him [his grad student] to be sure that the analysis could be done. And it looked fairly straightforward. He went to Edmonton and spent two or three days up there as part of putting this together. And I actually don’t remember what all he did.

It was clear that our role was not to validate the program or endorse the program in any way. And that’s not what the report does. In fact, you’ll note that the report very carefully describes it as an outcomes study only. So there’s nothing that he was involved in or that I was involved in, in actually looking at the model that they do. We had no involvement with that. There’s no documentation of the model.

There was no direct contact in the data gathering or the instrument. Our role was to review that the questions were appropriate questions for an outcome study. And then they gathered the data, and he analyzed it with a colleague at the Hazelden Foundation.

CBC: Okay. Were you involved in evaluating the data itself?

PATTON: We provided them with the analysis that is the centrepiece of the report, that is how the results came out. I remember adding to the limitations section, which I cited to you yesterday, trying to be careful that the report was not inappropriately used. And so that was my main contribution. But the data analysis that’s presented in that outcome study is right out of the results that they sent to Hazelden.

CBC: Okay. So the study then—who wrote the words that are the bulk of it that we see there? I have just seen the version that I sent to you, and I assume that it looks pretty much like what you have I guess?

PATTON: Yes.

CBC: Is it…

PATTON: Yes.

CBC: … pretty much the same thing?

PATTON: I mean the findings are descriptive findings for the most part—this is what the data said. As I recall there was some back and forth in the final writing about how much was going to be in it and what. So that there are, what, three or four names listed on it, and I presume everybody did some of the writing. I certainly reviewed it, especially adding to the limitations section. But the focus is on what the followup results were, as reported in the questionnaire and interviews, and so it is not more than that.

It’s not a—there are always difficulties in this kind of self-reporting. It is common in evaluation in general and chemical dependency programs specifically to have the problem of relying upon what people tell you after the fact.

So this doesn’t include independent validation of that. I remember that there were some parents contacted where they couldn’t reach the kids. And of course that’s another source of data, but that’s subject to its own problems. So I would treat it as a fairly modest study that is one part of a bigger puzzle, not as a definitive piece of work, as I told you yesterday.

* * *

CBC: It occurred to me that because some of the kids there are court-ordered and would have been on probation and part of their probation condition is that they attend there, now if they are being interviewed by someone from the program, would they have perhaps a bias or an incentive to underreport substance abuse…

PATTON: Sure.

CBC: … because in that case they could have trouble with the court?

PATTON: All the kids could have incentives for underreporting. They want to please the program staff if they have relationships with them. They know what outcome they’re supposed to report. They want to show that they’ve done well. In some cases, people actually believe that they’re doing better than they are. They’re in denial themselves about their use patterns. There’s the problem you mentioned of the court. That’s all the problem with self-reporting data. There are lots of reasons why self-report data are a first level, the very first thing you do to see how it looks on the surface.

There are studies where the self-report data are bad enough that you say, “Well, it’s not worth going to a lot of trouble to validate it because the self-report data are weak.”

And self-report data work best where the data are gathered anonymously by independent people, where there’s no incentive to worry about. And you know, it gets harder with kids to understand what it means when they’re told that they won’t be identified as individuals, that their data will only be aggregated with other kids, that nobody will know what their responses are. And they sign consent forms saying that they understand all of that. But you don’t really know if they trust that.

CBC: In this case was it done anonymously?

PATTON: Well, it can’t be done anonymously because they’re interviews.

CBC: Oh, there were interviews in this case.

PATTON: Yeah.

CBC: It wasn’t anonymous interviews.

PATTON: No."


“I did not conduct the study. They conducted the study. I oversaw the analysis,” he said.