I use all of the current versions of ChatGPT, Gemini, and Claude.
The hallucination rates are about the same as far as I can tell; they depend mostly on how niche the topic is, not on which model. The models do seem to be trained on somewhat different sets of academic sources, though, so it's worth using all of them.
I'm not talking about deep research or advanced thinking modes here; those are great for some tasks, but they don't add much when you just want all the sources on a subject rather than a full research report.
OpenAI has published a great deal of information about hallucination rates, as have the other major LLM providers.
You can't give one single global hallucination rate, since the rate depends on the use case. And despite the abundance of information available on how to pick the appropriate tool for a given task, very few people seem to take the time to recognize that these LLMs are tools, and that you have to learn how to use a tool before you can be productive with it.