The criticism feels harsh. Of course models don't know what they don't know; reporters can have the same biases. The wording could have been better, something like "lowers the probability of hallucinating", but it is correct that it helps guard against hallucination. It's just not a binary thing.
Alas, current LLM prompting involves a lot of hacks. Half of them are useless, of course, while the other half are critical for success. The trick is figuring out which is which.