To a certain extent, of course. The 2% was based on the assumption that if you're benchmarking against re-typing, you expect the same kind of quality you'd get from a good typist re-typing the documents.
From my own experiments, I tend to find that you can read through and correct errors only marginally faster than you can type: you either have to follow along with the cursor, or be able to position it very quickly whenever you spot an error, and as the error rate increases, jumping the cursor to each error quickly becomes too slow.
Dropping accuracy in your effort to correct the text doesn't really seem to speed things up much. You can likely speed it up if you're willing to assume that anything that passes the spellchecker is OK (but it won't be, especially as modern OCRs often fall back on letter-sequence statistics or dictionaries when they're uncertain about characters, so their errors tend to be valid words that a spellchecker won't flag).
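To make that concrete, here's a toy sketch (made-up word list and sentences, not a real spellchecker): classic OCR confusions like c->e and m->rn produce "eat" and "rnat", and only the non-word gets flagged.

    # Toy dictionary and sentences; a spellcheck pass only flags non-words,
    # so an OCR error that happens to form a real word sails straight through.
    DICTIONARY = {"the", "cat", "eat", "sat", "on", "mat"}

    ground_truth = "the cat sat on the mat".split()
    ocr_output   = "the eat sat on the rnat".split()  # two character-level errors

    flagged = [w for w in ocr_output if w not in DICTIONARY]
    missed  = [(t, o) for t, o in zip(ground_truth, ocr_output)
               if t != o and o in DICTIONARY]

    print("flagged by spellcheck:", flagged)  # ['rnat']
    print("errors it missed:", missed)        # [('cat', 'eat')]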
If you're OK with lower accuracy, e.g. for search, and the alternative is not processing the document at all, then the trade-off is drastically different.
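For example, here's a rough illustration (hypothetical documents, naive keyword matching) of why noisy OCR can still be good enough for retrieval: most words survive intact, so most queries still find the document, and only queries that hit a garbled word fail.

    # Hypothetical mini-corpus: "jumps" came out of OCR as "jurnps".
    documents = {
        "doc1": "the quick brown fox jurnps over the lazy dog",
        "doc2": "an entirely unrelated piece of text",
    }

    def search(query):
        # Return the documents containing every query term verbatim.
        terms = query.lower().split()
        return [doc_id for doc_id, text in documents.items()
                if all(term in text.split() for term in terms)]

    print(search("lazy dog"))  # ['doc1'] -- found despite the OCR error
    print(search("jumps"))     # []       -- only queries hitting a garbled word fail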
Doesn't that depend entirely on what you're using the text for and how accurate it needs to be?