Oh definitely. There are bound to be improvements, especially when you glue an LLM to a semantic engine, etc.
The issue is, again, fundamentally one of data. Without a way to authenticate what's machine-generated and what's "trusted," the proliferation of AI-generated content is bound to reduce data quality. This is a side effect of these models being trained to fool discriminators.
Ultimately, I think there's going to be a more serious look at the ethics of using these models and at putting guardrails around what exactly is permissible. I suspect the US will remain a wild west for some time, but the EU will be a test bed.
Ultimately, I'm fairly excited about the applications of all this.