
This seems pretty simple to correct, so I'm skeptical that nobody has done so yet in these experiments. If true, it's an oversight as interesting as the Monty Hall problem. The basic premise is that the structure of an experiment will naturally nudge randomness in a particular direction, and the analysis needs to adjust for that. Everyone who does this type of work should know this.

In a simplified experiment where we give people a 3-question quiz, those who got 2 questions right have one overestimation option (3) and two underestimation options (0 and 1). So it's easy to adjust for autocorrelation: check whether a large group of 2-scorers underestimates more than twice as often as it overestimates. Then compare that against the 1-scorers, who have the mirror-image bias, and see how far they deviate from naturally overestimating more than twice as often as they underestimate.
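A quick simulation makes the baseline concrete. This is a hypothetical sketch, not from any of the papers discussed: each person's self-estimate is pure noise, drawn uniformly from 0-3 regardless of their true score. Even with zero real self-assessment skill, the bounded scale produces the 2:1 asymmetries described above, which is exactly the null any real analysis would need to adjust against.

```python
import random

random.seed(0)

# counts[s] tallies how people with true score s mis-estimate themselves
counts = {s: {"over": 0, "under": 0, "exact": 0} for s in range(4)}

for _ in range(100_000):
    score = random.randint(0, 3)   # true quiz score
    guess = random.randint(0, 3)   # self-estimate: pure noise
    if guess > score:
        counts[score]["over"] += 1
    elif guess < score:
        counts[score]["under"] += 1
    else:
        counts[score]["exact"] += 1

# 2-scorers have one way to overestimate (3) and two to underestimate
# (0 or 1), so under/over should be about 2:1 under random guessing.
print(counts[2]["under"] / counts[2]["over"])

# 1-scorers mirror this: two overestimation options (2, 3), one
# underestimation option (0), so over/under is also about 2:1.
print(counts[1]["over"] / counts[1]["under"])
```

Both printed ratios land near 2.0, so a real dataset only supports a skill-related conclusion if its ratios deviate significantly from this structural baseline.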

I haven't reviewed these types of papers, but if nobody made even that basic adjustment in their analysis, how many other adjustments have been missed in experiments like this?


