This one isn't surprising at all: it doesn't deal with letters, but symbols. It only indirectly knows about spelling.
> In contrast, its ability to explain grammar is terrible. It misidentifies parts of speech, singulars and plurals, the subjects and objects of verbs,
I wonder if this is an area where having to generate an immediate final answer word-by-word is killing it, and if it could be induced to "diagram" a sentence first and get a correct answer. Can you give me an example of the query you're asking?
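To make the "diagram first" idea concrete, here's a rough sketch of the kind of two-step prompt I have in mind, assuming the OpenAI Python client and a gpt-4 model name; the prompt wording and the example sentence are just illustrative, not something I've tested:

```python
# Sketch of "diagram first, then answer" prompting.
# Assumptions: openai>=1.0 Python client, "gpt-4" as the model name,
# and my own made-up prompt wording -- illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sentence = "The dog that chased the cats was barking loudly."

# Step 1: ask for a structural breakdown before any grammatical judgment.
diagram = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Break this sentence into its grammatical parts "
            "(subject, verb, objects, clauses) before answering anything else:\n"
            + sentence
        ),
    }],
).choices[0].message.content

# Step 2: ask the actual question, with the diagram already in context.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Sentence: " + sentence},
        {"role": "assistant", "content": diagram},
        {"role": "user", "content": "Given that breakdown, what is the subject of 'was barking'?"},
    ],
).choices[0].message.content

print(diagram)
print(answer)
```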
> This one isn't surprising at all: it doesn't deal with letters, but symbols.
It’s not surprising to us now. It was very surprising to me when I first noticed it, because it contrasted sharply with ChatGPT’s ability to explain aspects of language that seem, to us humans, much more difficult, particularly word meaning.
> Can you give me an example of the query you're asking?
I have to admit that all of my testing of its grammar-explaining ability was done last December. I have repeated a few of those tests now with GPT-4, and it did fine [1].
>I have to admit that all of my testing of its grammar-explaining ability was done last December. I have repeated a few of those tests now with GPT-4, and it did fine [1].
A tale as old as time in this space! I appreciate you checking it again. They are improving so fast.