Yeah, JSON mode in Ollama, which isn’t even the full llama.cpp grammar functionality, performs better than OpenAI for me at this point. I don’t understand how they can be raking in billions of dollars and can’t even get this basic stuff right.
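For context, a minimal sketch of what Ollama's JSON mode looks like in practice (model name and prompt here are just examples, not from this thread; the server URL is Ollama's default). Setting `"format": "json"` constrains decoding so the response is always parseable JSON, even though it's only a subset of llama.cpp's full GBNF grammar support:

```python
import json

# Example request payload for Ollama's /api/generate endpoint.
# "format": "json" forces the model's output to be valid JSON.
payload = {
    "model": "llama3",  # hypothetical model name for illustration
    "prompt": "Extract the name and age from: 'Alice is 30.' Reply in JSON.",
    "format": "json",
    "stream": False,
}

# Uncomment to call a locally running Ollama instance:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# body = json.loads(urllib.request.urlopen(req).read())
# print(json.loads(body["response"]))

# With JSON mode on, the "response" field is guaranteed to parse;
# this is a representative response, not real model output:
example_response = '{"name": "Alice", "age": 30}'
parsed = json.loads(example_response)
print(parsed["name"], parsed["age"])
```

That guarantee of parseable output is the whole appeal for extraction tasks: no retry loops or regex cleanup on the model's reply.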
I don’t know what point you’re trying to make. Local models via Ollama also return JSON more consistently than GPT-4 for me, but I don’t use GPT-4 for this because it’s overkill and expensive for my text extraction tasks.
I mean, sure, but the parent should also just explicitly state what they were asking or claiming. I’ve answered every question asked. Making vague declarations that something isn’t “the benchmark,” without stating what you think “the benchmark” should be, is unhelpful.