This does not make sense to me. ChatGPT is completely nerfed to the point where it's either been conditioned or trained to provide absolutely zero concrete responses to anything. All it does is provide the most baseline, generic possible response followed by some throwaway recommendation to seek the advice of an actual expert.
The way to get around this is to have it "quote", or at least try to quote, from input documents. That's why RAG became so popular. Sure, it won't write you a contract, but it will read one back to you if you've provided it in your prompt.
In my experience, this does not get you close to what the top-level comment is describing, but it does get around the "nerfing" you describe.
It's very easy to get ChatGPT to provide legal advice based on information fed in the prompt. OpenAI is not censoring legal advice anywhere near as hard as they are censoring politics or naughty talk.
That's because it's just advice, not legal advice. Legal advice is something you get from an attorney who represents you; it creates a liability relationship between you and the attorney for the legal advice they provided.
Sure, we can call it "advice" instead of "legal advice", or we can even call it other names, like "potato", if that's what you want. My point is that potato is not censored.