Does ChatGPT have a debug function yet to Show Its Work, so to speak? I think this will be important in the future when it gets itself into drama, trouble, etc. It would probably also be useful for proving how ChatGPT created something, rather than it being known as an opaque box.
Adding to that, the human brain is incredibly complex and performs billions of functions. If a person says to me, "I love you," I should be able to ask them why they said that, but it would probably be unfair to expect a detailed answer including all of their environmental and genetic inputs, many of which they may not even be aware of.
If ChatGPT says it loves me, I not only expect the system to tell me why it said that but also what steps brought it to that conclusion. It is a computer, or a network of computers, after all. Even if the system is continuously learning, there should be some reproducible steps that can be enumerated.
ChatGPT: "I love you"
Me: "debug last transaction, Hal."
Here is where I would expect an enumeration of all the steps used to reach said conclusion. These steps may evolve/devolve over time as the system ingests new data, but it should be possible to have it think out loud, so to speak. Maybe the output is large, so ChatGPT should give me a link to a .tar file compressed with whatever it knows is my preferred compression.
[Edit] I accept that this may be hundreds of billions of calculations. I will wait the few minutes it takes to generate a tar file for me. It's good to get up and stretch the legs once in a while.
The steps would number in the billions and would essentially read, "I took this matrix and turned it into this matrix, which I turned into..." It is a bit like asking a person why they love you and expecting them to respond with their entire genome and the levels of various hormones in their brain at the time they uttered each word.
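To make that concrete, here is a minimal sketch of what such a "debug trace" would actually contain. The two-layer network, its dimensions, and its random weights are all made up for illustration; a real model would produce billions of such entries, none of them a human-readable "reason."

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions standing in for a real model's layers.
x = rng.standard_normal(4)          # input activations
W1 = rng.standard_normal((8, 4))    # layer 1 weights (hypothetical)
W2 = rng.standard_normal((2, 8))    # layer 2 weights (hypothetical)

trace = []                          # the "debug log" the post imagines
h = np.maximum(W1 @ x, 0.0)         # one matrix turned into another (ReLU layer)
trace.append(("layer1", h.copy()))
y = W2 @ h                          # ...which is turned into another
trace.append(("layer2", y.copy()))

for name, values in trace:
    print(name, values)             # raw numbers, not explanations
```

The trace is complete and reproducible, but each entry is just an array of floats; nothing in it says *why* the output came out the way it did.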
>If a person says to me, "I love you" I should be able to ask them why they said that.
People are certainly able to ask other humans this question, but to the best of my knowledge, no one in history has ever received a perfectly truthful response.
I agree in general, but don't think it's a particularly effective answer to this specific question relationship-wise. Nor would it be particularly useful when coming from a powerful but biased AI.
It's been some time since I last looked into this topic, but my understanding is that linear regression is not a black box, as there exist methods that elucidate how the variables impact the response. Neural networks, on the other hand, are opaque. Again, it's been a while, so there may now be ways to ascertain which inputs were used to generate the weights that led to a given response. However, I am skeptical that these methods have the same level of mathematical rigor as those used in linear regression.
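A minimal sketch of why linear regression is not a black box: the fitted coefficients directly state how each variable moves the response. The data and true coefficients below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 2))            # two explanatory variables
true_beta = np.array([3.0, -1.5])            # made-up ground truth
y = X @ true_beta + 0.01 * rng.standard_normal(100)

# Ordinary least squares fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each coefficient is an explicit, inspectable statement:
# "holding the other variable fixed, a unit increase in x_i
#  changes the prediction by beta_hat[i]".
print(beta_hat)
```

Nothing comparable falls out of a trained neural network's weight matrices, which is the asymmetry the comment above is pointing at.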
You can prompt it to be more logical and have it expose its reasoning a bit more by asking it "Thinking step-by-step, <question>?" It should then respond with "1. <assumption> 2. <assumption> 3. <conclusion>" or something like that.
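The trick above amounts to wrapping the question in an instruction. Here is a minimal sketch of such a wrapper; the function name and prompt wording are my own placeholders, and no real API is being called.

```python
def step_by_step_prompt(question: str) -> str:
    """Wrap a question in the 'think step-by-step' instruction."""
    return (
        "Thinking step-by-step, answer the following.\n"
        "List each assumption as a numbered item, then state the conclusion.\n\n"
        f"Question: {question}"
    )

prompt = step_by_step_prompt("Why might a chatbot output 'I love you'?")
print(prompt)
```

The model sees only this longer prompt; as the next comment notes, the numbered "steps" it writes back are more verbose output, not an actual trace of its internal computation.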
You'll never be able to get it to actually show its work though. That's just a hack to make it write more verbosely.