As a consumer this is slightly annoying. When I see a "user-posted" review on a website, I like to know that it's actually what the user posted. Things like typos and grammatical errors are useful bits of information that help me decide how reliable the reviewer is.
Da Vinci wrote without using capitals or punctuation.
Drew Houston wrote his Dropbox apology without using capitals.
And grammar... Well, grammar sometimes be subjective, y'dig?
It took me a long time to set aside my prejudice for people who had trouble writing perfectly. But once I did, and once I actually listened to what they were saying, I was often surprised to discover that those people turned out to be smarter than I was. Or more experienced. Or had some unique insight that I hadn't considered before.
Of course, Occam's Razor suggests that the simple explanation is I'm "just another dummy".
Or could it be that form without substance is rubbish?
I'd be interested in seeing how they partitioned these HITs. I would assume that one Turker performs the fixes while another verifies?
To me these n-stage Mechanical Turk tasks are much more interesting than the classical survey tasks you typically see on MTurk. You can achieve some amazingly creative results if you have one Turker make a change and then allow other workers to make subsequent changes, iterating over the previous versions.
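To make the idea concrete, here's a minimal sketch of that iterate-over-previous-versions pattern. The worker functions are purely hypothetical stand-ins; in a real pipeline each stage would be a separate HIT posted to MTurk, with a human's output feeding the next worker's input.

```python
def iterate_task(seed, stages):
    """Run a seed artifact through a chain of worker 'improvement' stages.

    Each stage sees only the previous version, mirroring how one Turker's
    output becomes the next Turker's input. The full history is kept so the
    requester can audit (or vote on) intermediate versions.
    """
    versions = [seed]
    for worker in stages:
        versions.append(worker(versions[-1]))
    return versions

# Hypothetical "workers": each makes one small change to the text.
fix_typos  = lambda text: text.replace("teh", "the")
capitalize = lambda text: text[0].upper() + text[1:]
add_period = lambda text: text if text.endswith(".") else text + "."

history = iterate_task("teh quick brown fox", [fix_typos, capitalize, add_period])
print(history[-1])  # "The quick brown fox."
```

The interesting design question is what lives between stages: a simple chain like this, or a verify/vote step that decides which candidate version the next worker sees.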
Effective crowdsourced, micro-iterative design may be a great way to bootstrap a startup with user-generated content.