I thought the explicit goal of AI was to create systems that can do tasks that typically require human intelligence. That includes beneficial things like finding cures for diseases, driving technological innovation, and so on. Wouldn’t it be a shame to limit this growth potential just to protect friggin’ YouTubers?
Maybe go after the application, not the technology? Someone uses AI to explicitly plagiarize an artist’s content? Sure, go ahead & sue! But limiting the growth potential of a whole class of technology seems like a bad idea, a really bad idea actually, if your military adversary has made that same technology a top priority for the coming years …