
Imagine solving a tough math problem. Are you generating text, or doing something else entirely?


I just asked ChatGPT 3.5 the following: "Please give an exponential equation that has integer solutions which is difficult to solve.", and the response is: "

Certainly, here's an exponential equation that can be challenging to solve for integer solutions:

2^x + 3^y = 7^z

This equation involves three variables, x, y, and z, and requires finding integer values for these variables that satisfy the equation. This type of equation is known as a Diophantine equation, and solving it can be quite challenging, especially for larger values of x, y, and z.

"


That's quite the nerd snipe.

I mean, it's definitely not a Diophantine equation and solving it is definitely not challenging -- (2, 1, 1) happens to be an easy solution -- and I want to say it probably doesn't have any other solutions, but I don't see a great way to prove that...
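For what it's worth, a quick brute-force search (a sketch, not a proof -- it only checks small exponents) turns up no solution other than (2, 1, 1):

```python
def solutions(limit=10):
    """Search for non-negative integers (x, y, z) with 2^x + 3^y = 7^z,
    checking all x, y below `limit`. This bounds x and y only, so it
    can miss large solutions -- it is a sanity check, not a proof."""
    sols = []
    for x in range(limit):
        for y in range(limit):
            s = 2**x + 3**y
            # Test whether s is a power of 7 by climbing powers of 7.
            p, z = 1, 0
            while p < s:
                p *= 7
                z += 1
            if p == s:
                sols.append((x, y, z))
    return sols

print(solutions())
```

Running this with a larger limit still only finds (2, 1, 1), which supports the hunch but of course rules nothing out for big exponents.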


Can it reason about a math problem that no human ever thought about and solve it?


Reasoning about a math problem that no human has ever thought about and solving it is the bar for understanding now? Can you do that?


If you consider mathematical symbols text then I would indeed be generating text.


Sometimes I solve maths problems by visualising: perhaps I want to differentiate something, so I imagine the graph of the function, and then the slope of the graph, and then try to compare that to various simple functions that I could give as a guess of the answer, which I only then feed into a symbolic process of "assume this is true and work backwards".


An LLM trained on just Othello moves will reconstruct a board state from the sequence of moves to aid in prediction of the next move. You have no idea what an LLM is or is not doing to predict. Prediction is just the objective. Don't confuse that for the process.


> You have no idea what an LLM is or is not doing to predict.

Indeed, but I'm not trying to with that comment, which is just about how my own thinking seems to work on self-reflection. That said, I do have reason to doubt the accuracy of human introspection into our own thought processes, and therefore my own judgment here may also be flawed.



