
When the goal is "harassing someone into depression and suicide" though, the incentive will never go away. People are going to start doing things like this to be deliberately malicious, sending videos of their dead parents saying horrible things about them and so on.

The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.



> The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.

Hard disagree. Technology is a tool. Tools don’t do anything without a human in the loop, even if only to set an autonomous system running in the first place.

Even technologies like guns and nukes are not inherently bad. Can we be trusted to behave ourselves with such tools? I would argue “no”.


If there were a box you could buy that let you think of someone and then push a button and that person died, you could argue all you want that "the technology is just a tool, it's how you use it that matters" but it would be difficult to argue that having that technology out in the world for anyone to use would be a net benefit to society.

The fact that we cannot trust humanity to behave with such tools means that the tool (or maybe the tool being accessible) is the problem.


This “technology isn't evil” tale is as old as time…

Technologies may not seem inherently bad, but if they tend to be used in bad ways, the difference is minimal.

Deepfakes have practically no positive use and plenty of potential for abuse. They're not a neutral technology. They're a negative social outcome from what might be a neutral technology (ML).


Tech tends to be used in both good and bad ways. Some tend toward the bad, e.g. nerve gas, and some toward the good, e.g. penicillin. Deepfake stuff seems mostly to be used for entertainment.


Deepfake stuff, now that it's trivial to produce, is going to be used for an infinite amount of harassment, constant "look here's a video of AOC kicking a dog" posts, etc.

The problem is that the potential for truly positive stuff is minimal and the potential for truly awful stuff is unlimited. That's even ignoring the fact that the massive energy and water costs of these technologies are going to be an ecological disaster if we don't get them under control - and we won't, because the billionaires running these systems donate millions to the politicians voting against these protections.


If the potential for positivity is minimal compared to the potential for harm, it's not a socially neutral technology. I get downvoted on this opinion, but it's my hill. Technology might be neutral, but it's the applications that matter.

Ban deepfakes.


I'm guessing it's mostly used for porn. And even as entertainment, it's not something we really needed on the whole; we have enough entertainment. Deepfakes have no place in a sophisticated, evolved society.



