The Dunning–Kruger effect came up in a comment on this blog recently (the only one so far!). The study, published in the Journal of Personality and Social Psychology (full text here), suggested that individuals with low levels of competence are both unaware of how incompetent they are and relatively ineffective at recognising actual skill in others (click here for a rundown). The effect was brought up in reference to a post I made on implicit theory by Dweck and Leggett. Comparing the two studies is something I would like to go into in another post. However, the reason I am posting on it here is that it came to mind today when purchasing SPSS.
My previous licence for SPSS ran out just last month, and with 3 months left to go on the PhD I needed to buy it again (talk about bad timing). Anyway, the version of SPSS that I bought came with a copy of the latent modelling software AMOS. Now, I have been doing latent modelling for about 5 years using LISREL. LISREL is about 10 times harder to use, but I have persisted, and here is why: in learning to use LISREL I had to learn how to write all the correct syntax, which gave me a pretty good idea of what was going on under the hood (I even learnt how to do a CFA by hand!). With AMOS it is a case of drawing the model you want and pressing go (requiring little more than base-level skills in Microsoft Paint).
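For anyone wondering what "under the hood" actually means here: a CFA boils down to choosing loadings and variances so that the model-implied covariance matrix reproduces the observed one. Here is a minimal sketch of that idea in Python/numpy, where the one-factor structure and every number are invented purely for illustration (real software estimates the free parameters, typically by maximum likelihood, rather than fixing them by hand):

```python
import numpy as np

# Hypothetical one-factor CFA with three observed indicators.
# All parameter values are invented for illustration only.
lam = np.array([[0.8], [0.7], [0.6]])  # factor loadings (Lambda)
phi = np.array([[1.0]])                # factor variance (Phi)
theta = np.diag([0.36, 0.51, 0.64])    # unique error variances (Theta)

# Model-implied covariance matrix: Sigma = Lambda Phi Lambda' + Theta.
# Fitting a CFA means choosing the free parameters so that Sigma
# reproduces the observed covariance matrix as closely as possible.
sigma = lam @ phi @ lam.T + theta
print(sigma)
```

That matrix algebra is exactly what the LISREL syntax forces you to spell out, and what a point-and-click diagram lets you skip.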
My concern is that as products like AMOS become available, people with less and less statistical skill are increasingly able to access and test very complex statistical models. With this I wonder how much of the old "unskilled and unaware of it" is taking place in much of today's social science. It also makes me wonder whether increasingly user-friendly research tools are really as beneficial as they seem on the surface.
The interesting thing is that latent modelling programs have made complex models so easy to develop and test that researchers are also becoming increasingly unaware of the skills they lack. Thus we are increasingly seeing huge multi-stage models with paths going all over the place (often developed with the aid of modification indices), where the only criterion used to judge their veracity is whether the fit indices reach the magic numbers. What happened, I wonder, to the law of parsimony, and to basing research models on a detailed examination of theory rather than purely on what a computer tells us looks good?
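To make the "magic numbers" concrete: fit indices such as RMSEA are simple functions of the model chi-square, and the conventional cutoffs (roughly .05 for close fit and .08 for acceptable fit) are exactly the thresholds people chase. A rough sketch using one common form of the single-group formula; the chi-square, degrees of freedom, and sample size below are invented for illustration:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA via one common single-group formula:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model: chi-square of 85 on 40 degrees of freedom, N = 300.
print(round(rmsea(85.0, 40, 300), 3))  # 0.061 -- just under the usual .08 cutoff
```

Note that nothing in that calculation knows whether the paths in the model make any theoretical sense.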
~Phil
There's quite a good rundown of the Dunning–Kruger effect here: http://www.youtube.com/watch?v=XyOHJa5Vj5Y
Good post though, I agree that statistical programs, by making things easier, are actually reducing our true knowledge.
I think I'm a prime example of your point: me and statistics do not mix at all. Give me SPSS though and I can generate some reasonable looking numbers. Do I know what they mean? Vaguely. Could I use them to support or disprove an idea? Definitely.
I suppose the same arguments could be made for using things like calculators instead of abacuses. The only thing we need to ask, I think, is: do the benefits of being able to do more and more complicated analyses outweigh the disadvantages of depriving some people of an understanding of the fundamentals of a subject area?
Yep, the whole "taking the technology away from the masses" thing was going through my head as I wrote this. Not sure what to make of it really. On the one hand people should have access; on the other, so many people do such a bad job, and so often they are completely unaware of the detrimental mistakes they are making. Likewise, from a consumer's perspective there is no way of knowing what sorts of little fiddles they have done to their models, as many of the typical mistakes people make are not included in reported results.
I did read a great paper the other day that said something along the lines of "methods seem scary but they are really quite easy when you compare them with the difficulties of knowing what good data is and then collecting it". I'll have to find that paper and post it here.
Yeah, it's basically a job for better education, I think, especially in the social sciences. I remember my psychology stats labs were just students sitting in front of a computer, going through the steps written on a piece of paper to figure out an ANOVA result or whatever. From what I can tell, at least half had no idea what they were doing or why they were doing it.
I had one tutor who was really good at explaining what everything meant, and another who seemed to be there just to run us through what to do before heading off to lunch or something. So my knowledge is patchy.
That paper sounds quite interesting. There's a paper in the Journal of the Experimental Analysis of Behavior that talks about developing and presenting mathematical models in behavior analysis. It's only fairly recently that complicated mathematical models have been used in the field, so this guy (Mazur) just goes through some simple guidelines for which parts of the model you should expand on and a few tips on how to present it in a way that most people can understand.