Ideally no, but there are established norms and unwritten rules. Plus, a mechanism was built to communicate the limits. These norms worked for decades.

The fences were reasonable because the demands were reasonable, and both sides understood why they were there and respected the borders.

That peace has been broken: the norms were thrown away, and the people who threw them away cheered about it. Now people are fighting back. They were silent before only because the system was working.

It was akin to marking some doors "authorized personnel only" but leaving them unlocked. People and programs respected these stickers. Now there are people and programs who don't, so people have started to reinforce these doors.
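For the curious, the "sticker on the door" here is presumably the Robots Exclusion Protocol, i.e. robots.txt. Here's a minimal sketch of how a polite crawler honors it, using Python's standard library; the robots.txt contents and the URLs are made up for illustration, though GPTBot and CCBot are real crawler user-agent tokens:

```python
from urllib import robotparser

# A made-up robots.txt: the "authorized personnel only" stickers.
# GPTBot and CCBot are real crawler tokens; the policy itself is illustrative.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler checks the sticker before opening the door.
print(rp.can_fetch("GPTBot", "https://example.com/notes/"))         # False: asked to stay out
print(rp.can_fetch("GenericBrowser", "https://example.com/notes/")) # True: everyone else welcome
```

Note that nothing in the protocol enforces the answer; can_fetch() only reports what the site owner asked for. Honoring it is voluntary, which is exactly the norm being discussed.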

It doesn't matter what you prefer. The apples are spoiled now; there's no turning back. The days of peace and harmony are over, thanks to the "move fast and break things; we're doing something amazing anyway, and we don't need permission!" people. If your use is benign but my filter is blocking it, get mad at the parties who caused this fence to appear. It's not my fault that I had to put up a fence to protect myself.
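"Putting up a fence" in practice means server-side enforcement instead of a polite request. A hedged sketch, assuming a Flask app; the BLOCKED_AGENTS list and the route are illustrative, not a complete or spoof-proof policy:

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical denylist. GPTBot and CCBot are real crawler tokens,
# but this policy is illustrative, not exhaustive.
BLOCKED_AGENTS = ("GPTBot", "CCBot")

@app.before_request
def reinforce_the_door():
    # The sticker becomes a locked door: matching crawlers get 403.
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in BLOCKED_AGENTS):
        abort(403)

@app.route("/")
def index():
    return "Humans are always welcome to read this."
```

User-Agent blocking is best-effort: a crawler that already ignores robots.txt can lie about its identity too, which is why some operators escalate to IP-range blocks or proxy-level bot filtering.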

For the current state of affairs, see this list [0]. I'm very sensitive to the ethics of training your model on my data without my consent and selling the result for money.

I don't care how you stretch fair use. The moment you earn money from your model, it's not fair use anymore [1].

[0]: https://notes.bayindirh.io/notes/Lists/Discussions+about+Art...

[1]: https://news.ycombinator.com/item?id=39188979



Well, what'll happen for the most part is not users getting mad, but a general migration to fenceless areas. Prompts will ask for "content similar to X," and the bots will simply use what they have access to, rendering the fences moot. And there will always be authors who don't mind their content being monetized or utilized by AI.


> but a general migration to fenceless areas.

A fenceless internet is the ultimate goal already: we want every netizen (human or machine) to obey the written and unwritten rules and be a good netizen.

> Prompts will be for "content similar to X" and the bots will merely use what it has access to, rendering the fences moot.

Absolutely not. I don't want my content to end up in an LLM, period. I don't license it that way, and I don't consent. Humans are always welcome to read it, though.

An LLM is a hallucinating parrot anyway, so I don't want my words fed into that LSD-fueled computing chaos.

> And there will always be authors who don't mind their content being monetized or utilized by AI.

Yes, and there will always be authors who do mind their content being monetized or utilized by AI.

This is life.


> good netizen

The rules dictating what that means change over time as context changes, and this is definitely one of the largest shifts since the advent of the web.

> Absolutely not.

OK, but in that case it seems you'll have to delist from search engines and any other place where an LLM might get a preview of what a human-supplied link is about. And there's likely a spiral here: as you remove references to your content from more access points, fewer humans will come across it, leading to fewer LLM-based queries, so you'll definitely get your wish. Just not the way you envisioned.

LLMs may hallucinate at times, but many people, myself included, find that the pros outweigh the cons, and will increasingly gravitate toward such services for a growing variety of tasks. This is the evolving state of the internet.



