Do they? All of the reporting I’ve read says that they’re a lot better at preventing this, so the popular choices before Grok were things like Civitai, not Google or OpenAI — and X explicitly embraced “spicy” mode as a growth strategy.
“Better at preventing” ≠ “can’t”. The hoopla you are seeing here is textbook selective enforcement. There’s also the kafkatrap that demonstrating a model can do this makes you technically criminally liable for possession of CSAM — which I’m sure will be enforced against the journalists who demonstrated it as rigorously as they want it enforced against Twitter.
That's not really relevant. Generally with things that have both good and bad uses you can't reasonably prevent all the bad uses. You can just put in safeguards to try to reduce them.
> The hoopla you are seeing here is textbook selective enforcement
This is true in the same way that American gun nuts say shootings happen in other countries, hoping you won’t check absolute numbers. The fact that the large groups which specialize in creating this content favor Grok over the other major players strongly suggests the complaint is valid.
> “Better at preventing” ≠ “can’t”. The hoopla you are seeing here is textbook selective enforcement.
When the prevention is sufficient that the politicians and law enforcement don't realise a crime has occurred, it's no more selective than policing being limited to where the police and CCTV cameras actually are.
Given (1) that the scale of the web means it can only be searched by the same kind of AI a more respectable GenAI supplier would use to prevent unlawful output in the first place, and (2) that Gemini and ChatGPT image generation are not as heavily tied to a social network as Grok is to Twitter, it may simply be that nobody has evidence of wrongdoing by any other specifically identifiable GenAI image provider. At most it will be “we know this image was GenAI, but we don’t know whether it came from an online service or a local model, nor whether money changed hands for it.”
X makes it easy to know that X is to blame. Sora probably could have had this same drama if OpenAI didn’t filter out content like this, but evidently they filter well enough.