Harold’s Hillock: ABA, Equivocation and Other Feints
There’s a problem on Harold Doherty’s blog: it’s just about impossible to carry on a civilized conversation with anyone without Harold stepping in and ruining the reciprocal nature of it. Case in point: Harold had a post on the myth of excluding very low IQ children from the Lovaas 1987 study. I chip in with the fact that the myth isn’t a myth. Trouble is, I remember what I read really well, just not where I read it, so I can’t cite the reference. Harold dives in, and not just with a ‘please cite the reference’. Fortunately, Michelle blogs about it as a form of holiday from her Tribunal case, which is harrowing and no doubt giving her sleepless nights. So I refer Harold to her blog for the proper references - and nothing happens. Harold won’t allow the post. OK, it wrecks the reciprocal nature of information exchange, but it’s not that big an obstacle - a hillock, really - since I can blog about it on my own site. Still, you must admit it’s annoying, especially when the matter is quite a bit more serious.
In the same post on Harold’s blog, I also made a reference to the Sallows and Graupner (2005) study, which I said demonstrates the random effect of the intervention.
Here’s what Anonymous had to say:
>>>alyric wrote that the paper demonstrates "that the therapy is not the critical factor in the outcome". I thought it might be important to recognise that both therapeutic groups DID recieve [sic]
True but not relevant, yet.
>>>Table 1 indicates mean hours of
Getting warmer, but just a tad misleading there, and I’m sure it wasn’t meant to be. The hours for the second year were 36.6 (clinic) and 30.9 (parent), so there’s still a considerable gap. And that’s not the only gap: the supervision regimes are markedly different, 6-10 hours per week (clinic) as against 6 hours per month (parent).
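To put that supervision gap in comparable units, here’s a quick back-of-the-envelope sketch (the weeks-per-month conversion factor is my own assumption, not a figure from the paper):

```python
# Rough comparison of supervision intensity, converting both regimes
# to hours per month. 52/12 weeks per month (~4.33) is an assumed average.
WEEKS_PER_MONTH = 52 / 12

clinic_low = 6 * WEEKS_PER_MONTH    # ~26 h/month
clinic_high = 10 * WEEKS_PER_MONTH  # ~43 h/month
parent = 6.0                        # h/month, as reported

print(f"Clinic-directed: {clinic_low:.0f}-{clinic_high:.0f} h/month")
print(f"Parent-directed: {parent:.0f} h/month")
print(f"Ratio: roughly {clinic_low / parent:.1f}x to {clinic_high / parent:.1f}x")
```

On those assumptions the clinic group got roughly four to seven times the supervision of the parent group, which is why “6 hours per month” versus “6-10 hours per week” is not a small difference.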
>>>>If you look at the data reported prior to combining scores the clinic directed group of rapid learners that met criteria for best outcomes is close to 38% while 60% of the parent directed rapid learners met criteria for best outcomes. All recieved [sic]
Exactly: with much less intervention and much less professional supervision, the parent group had markedly better outcomes. The difference between 60% and 38% is more than enough to do two things - bury this as a replication of Lovaas, and cast considerable doubt on whether there is any correlation whatsoever between the intervention and the outcome. When data look like this, one tends to think “random”. Another thing to keep in mind is that the clinic group also had an advantage on some of the variables that Sallows et al seem to think might be highly correlated with outcome, like IQ. The clinic group’s pre-treatment mean IQ was 60.4, compared to 51 for the parent group, and the parent group’s VAB scores and verbal imitation scores were also significantly lower than the clinic group’s. Didn’t do too well with that head start, did they?
>>>>One confound that may be at play that may account for some of the variance is alluded to by the authors when they report that parents "...were encouraged to to extend the impact of treatment by practicing newly learned material with their child throughout the day..." (p 420). It begs the question as to if the parents who were more engaged in "directing" home based programs were in fact also more inclined to extend therapy hours as they had a more intimate knowledge of programs? <<<<
Yes, they did say that on page 420, but they also said the following:
P432 – “Senior therapists rated parents on the percentage of involvement in their child’s treatment during the first year. Although the correlation with outcome, r = .32, was not significant.” That’s a weak correlation, and not a significant one.
P434 – “First, ratings of parental involvement were weakly correlated to outcome.”
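For what it’s worth, the paper’s own r = .32 is easy to sanity-check. This is only a sketch: the r comes from p432, but the sample size behind that correlation isn’t quoted here, so I’m assuming n = 24 (the number of children in the abstract below); the critical value is from standard t tables:

```python
import math

# Significance check for a Pearson correlation.
# r = .32 is from the paper (p432); n = 24 is an assumption on my part.
r, n = 0.32, 24
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # t-statistic with n-2 df

T_CRIT = 2.074  # two-tailed critical value for t(22) at alpha = .05 (standard tables)
print(f"t = {t:.2f} on {n - 2} df; significant at .05: {abs(t) > T_CRIT}")
```

With those assumptions t comes out around 1.6, well short of the roughly 2.07 needed for significance, which is consistent with the authors’ own “not significant” and “weakly correlated” statements.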
There’s a flat contradiction there, and it’s all the more puzzling because they had data showing no correlation between parental involvement and outcome. So why try to use parental involvement to explain the good results for the parent group? Yet this is how they interpreted the differences in data between the parent and clinic groups:
“Parent-directed children, who received 6 hours per month of supervision (usually 3 hours every other week, which is much more than “parent managed” or “workshop” supervision), did about as well as clinic-directed children, although they received much less supervision.”
I think this is in the class of a ‘whopper’. They have a huge amount of statistical data in this study, and yet here we have 60% good outcomes being treated as more or less equal to 38%.
Both the whopper and the contradictory analysis are worth a letter to the American Journal on Mental Retardation asking for an explanation, in the fond hope that there will be an erratum published at least for the whopper. By nature, I’m a pessimist.
>>>>The primary question the authors attempted to address seems to have been whether a Community based program could achieve similar results to what Lovaas (1987) did (without using aversives) (p419). If you think about it... the entire program, both parent directed and clinic directed service models, were in fact facets of the WEAP community based program... it appears that as such, they are pretty darn close to in fact achieving such outcomes.<<<<
Again, that’s what they said at the start of the 2005 paper. However, way back at the beginning, here’s what the title and abstract said:
Replicating Lovaas’ Treatment and Findings: Preliminary Results
Glen O. Sallows and Tamlynn D. Graupner
Abstract
Twenty-four autistic children completed the first year of a three-year replication study of the 1987 research published by Lovaas. Changes in pre-post test scores showed an average gain of 22 IQ points. Nineteen of the children matched those in Lovaas’ study. Eight children showed a gain of 45 IQ points, raising them into the average range. Gains in adaptive/social skills rose to the low average range. These “best outcome” children represented 42% of the matched group. Several factors related to outcome and its prediction are discussed.
Originally they were out to duplicate Lovaas’ findings, and that’s perfectly OK. What is not OK is to change tracks along the way and not even attempt to address the discrepancies between their initial aim and their final results. Well, it’s not OK for a paper in a scientific journal. But I’ve said before that behaviour analysis does not use the methods of science, and that behaviour analysts tend not to learn from other areas of psychological research. This study demonstrates that apparently they cannot learn even from their own research. Carrying on as if this study is nothing out of the ordinary does nothing to advance understanding; it simply ignores the discordant in favour of maintaining a very shaky status quo. There is another plausible explanation of the Sallows and Graupner results: that the outcomes are essentially random with respect to the intervention.
“Is Behaviorism Becoming a Pseudoscience?” by Jerome C. Wakefield (2006) looks at token economies in the mental health system as an alternative to pharmaceutical approaches to managing schizophrenic patients. Apparently Wong, Midkiff and Wyatt lament that behaviour-analytic management is cast aside in favour of pharmaceutical solutions. Here’s equivocation at its best (or worst):
“Bizarre responses, most notably psychotic speech, will at times resist contingency management procedures… or will spontaneously recover over time… or when training has ended.” Has any possible outcome been omitted here?
They go on:
“These results have been interpreted as showing that clients’ underlying belief systems have remained intact despite behavioral training.” No kidding!
And –
“However, multiform and persistent bizarre verbalizations can be parsimoniously viewed as generalized responses with a long history of intermittent reinforcement. After being positively and negatively reinforced by different people in various situations over many years, bizarre verbalizations could be overlearned responses that resist contingencies administered in circumscribed therapy sessions over mere weeks or months.” In other words, no amount of data is enough to disconfirm the behaviorist explanation, and, in what should by now be a familiar refrain, the therapy would have worked if only they’d had more time.
Paul wrote a critique of Wakefield’s critique, which readers should look at. It looks to me like really high-quality equivocation, though Wakefield did ignore some things, notably the nature of the population in the study - a very big factor. That does not, however, make up for the lack of parity of reasoning or, as another reviewer pointed out, for the effects of the aversives, which Paul attempts to equivocate his way around.