Because Apple won’t give you access to what you need as a dev for this kind of thing on iPhone: always-on audio listening across multiple streams (ambient sound, my voice, whatever is playing in my headphones). Think of an AI assistant that listens to audiobooks together with you and lets you ask questions, look things up, etc.
Does the Bavarian power grid also routinely go down because they don't invest in it and at the same time also refuse to add interconnects with other regions?
Not sure, but I think grid integration is international, with significant interconnects to e.g. France, rather than each federal state within DE running its own thing?
More that the stereotypes people have of Germany/USA are really of Bavaria/Texas (beef, guns, lederhosen), more (small c) conservative, and there are occasional mumbles and grumbles of independence but they don't have any substance to them.
At least, that's what it seems. My grasp of German means there's a game of telephone between reality and my understanding of Bavaria, and I've never even been to Texas so I'm judging them by what people say online.
Arguably what they win in input training cost they may lose in output censorship cost? But then again the line between Western “Guardrails” and Eastern “Censorship” isn’t all that clear.
America has also committed massacres, and overthrown the governments of foreign nations.
No one in my family is at risk from either of those statements. And there is no automated system to stop them. That’s the difference. Just because people decided that massive platforms should limit hate-speech doesn’t mean the west is performing censorship of a comparable level. Not even close.
But what about “assisted by” AI? Plenty of people use LLMs to enhance their writing abilities, like, say, ’90s era grammar & spell check. Plenty of AI users are sophisticated enough to understand that dumping pure AI-gen content is a bad idea. And what’s wrong with AI-enhanced speech?
Worse, OpenAI LLM pathologies are creeping into text written by actual humans because people are seeing so much garbage written by it that they're adopting its behavior.
Turns out that there is more than one kind of learning machine in play online and both can pick up the bad behavior of the other.
That's nothing new. Actual humans were writing businessy LinkedIn posts this way long before GPT-3 came out. I'd even say such posts are more awful than what GPT produces by default.
“Linked their identities to X” is the key behavior to understand here. A lot of creative people have linked their identities to activities that are low on the food chain for AI. The close-minded creatives will resist AI because it threatens the identities they built for themselves, the open-minded ones will embrace AI as a tool to increase their creativity.
I thought the explicit goal of AI was to create systems that can do tasks that typically require human intelligence. That includes beneficial things like finding cures for diseases, technology innovation, etc … Wouldn’t it be a shame to limit this growth potential to protect friggin’ YouTubers?
Maybe go after the application, not the technology? Someone uses AI to explicitly plagiarize an artist’s content? Sure, go ahead & sue! But limiting the growth potential of a whole class of technology seems like a bad idea, a really bad idea actually if your military enemy had made that same technology a top priority for the next years …
This is going to be extremely unpopular, but I think an argument could be made that we need to give AI research an exemption from copyright enforcement until we have a significant lead on China in AI development. They sure as hell are not going to obey Western copyright law in training their models.
… and this is how you lose an AI war to an enemy who has been ignoring copyright laws for decades. Unless it’s clear that “more data” does NOT produce faster progress, it’s just safer to allow indiscriminate scraping.