
> get that shit outta there because we can’t own something we stole from someone else

How does anyone prove it though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.



It's partly about Netflix getting sued by someone claiming infringement, but also partly (maybe mostly) about Netflix maintaining their right to sue others for infringement.

The scenario looks like this:

* Be Netflix. Own some movie or series where the main elements (plot, characters, setting) were GenAI-created.

* See someone else using your plot/characters/setting in their own for-profit works.

* Try suing that someone else for copyright infringement.

* Get laughed out of court because the US Copyright Office has already said that purely GenAI-created material is not copyrightable. [1]

[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...


This scenario only plays out if it is known what was or wasn't made with GenAI.


It would become known during discovery.


How can you find out if an AI created something versus a human with a pixel editor?


In a legal case? You question the authors under oath, subpoena communications records, billing records, etc.

If there's even a hint that you used AI output in the work and you failed to disclose it to the US Copyright Office, they can cancel your registration.


You just need to say you improved on the AI's output and that the result is yours.

Then you can sue.


Are you kidding me? Everyone knows it's pirated content (a.k.a. stealing); there's plenty of proof out there:

- https://arstechnica.com/tech-policy/2025/02/meta-torrented-o...

- https://news.bloomberglaw.com/ip-law/openai-risks-billions-a...

Beyond that, a bit of common sense tells you all you need to know about where the data comes from: the datasets are never released, LLM outputs are suspiciously close to original copyrighted content, AI founders openly say that paying for copyrighted content would be too costly, etc.


Anyone with a brain knows it is not stolen, but the fact that people will claim it is remains a risk nevertheless.


It is stolen on a cultural level at least.

But since many of these models will blurt out obviously infringing material even without targeted prompting, it's also an active, ongoing infringer.



