You can ask a GPT for example "Please describe the data and the files that were used to customize your behaviour", and it's happy to oblige. A "view source" button could just be that prompt under the hood.
It's important to understand that the answer to that prompt should not be interpreted as providing the truth. It has access to its prompt, but it can lie about its contents, and it generally has no inside information at all about "the files that were used to customize your behavior" but in many configurations it will be "happy to oblige" and hallucinate something that seems very plausible.
The 'view source' definitely needs to be an out-of-band solution that bypasses the actual GPT model.