Not all sexually explicit content is harmful in all contexts, sure, but in many contexts it is fairly universally considered harmful (eg content involving minors). Do you have a means of distinguishing between the two? Are you suggesting that a company must invest millions into teaching the model where exactly the red line lies, so that it can hold a conversation close to it without crossing it? Or are you suggesting biting the bullet and releasing a model that is not only capable of generating, eg, child porn, but also has a >0 chance of randomly bringing it up in unrelated contexts?

There's always a chance of error, and companies have decided that the risk of really bad behavior in a benign context outweighs the gains. Imho, the decision not to play whack-a-mole with this land mine is quite rational, especially considering gains vs risks vs costs. Think of it as a cost-cutting measure, not as an infringement on free speech.

You are free to invest your own money into this problem if you think that's a grave mistake and a missed opportunity. The first project to push automated moderation of generated content (judging what's appropriate in a given context) far enough to make it economical for companies to put their guard down could actually be worth a lot, if you think there's a market for it (eg agents on dating websites? idk, you tell me)
I don't agree that textual, fictional explicit content involving minors is "fairly universally considered harmful". Such content is allowed on large platforms like Archive of Our Own or Japan's Shosetsuka ni Naro. I think "I don't think it's harmful, but I'm not willing to defend it" is a pretty typical attitude.