It's quite impressive to read an article as alarmist as this and find that basically every regulation it's complaining about is actually very sensible.
Of course an AI tutor is a high-risk scenario: an AI that children are supposed to trust is a highly risky endeavor. If you don't offer cyber-security guarantees, for example, someone could hack it, start talking directly to those children, and steer the conversation into dangerous subjects. If you don't log the interactions between the children and the AI tutor, parents who suspect the tutor is malfunctioning have no records to review and verify this, even if they sue. And regulations like this ensure that businesses willing to do it well won't be easily outcompeted by people moving fast and breaking things.
And complaining about the fact that enforcement will be federated and not centralized at the EU level is simply a misunderstanding of the EU. Basically no one actually living in the EU wants to put power into EU authorities, everyone wants to keep most of the power in their own countries. The staffing problem is an issue, but it's also an excellent opportunity: creating AI jobs in every region of the EU, that EU funds could help pay for, is a great chance for the engineering workforce in EU countries.
Overall, I'm happy that I read this article. It left me very happy with the existence and form of the EU AI Act.
The more the author lays out the detail, the more I like what they're describing.
We don't want to (further) sleepwalk into a society where we're governed by algorithms we don't (or can't) understand. There is a real risk we build systems where humans end up just blindly accepting what their computers are telling them.
Computers are powerful, but ultimately people work on incentives. Even a rigorously tested system fails in the presence of misaligned incentives. Adding in AI so you can't even rigorously reason about the system further obscures the real issue of misaligned incentives.
If we get AI "wrong", then we forever bake wrong incentives into the systems and our societal fabric. Attempts to correct these will be hampered by those same systems.
> The more the author lays out the detail, the more I like what they're describing.
I think the original AI Act was actually pretty good, but the stuff they crammed in at the last minute about LLMs is nowhere near as good, and potentially actively harmful, because it was drafted in a rush during a massive hype cycle.
All “high risk” uses listed in the article (“Systems that are used in sectors like education, employment, law enforcement, recruiting, and essential public services,
Systems used in certain kinds of products, including machinery, toys, lifts, medical devices, and vehicles.”) seem to me pretty high risk and in need of regulation. If that’s what EU’s AI act is really about, I cannot blame the EU at all. Quite the contrary.
It's telling that the author doesn't even attempt to argue why these shouldn't be considered high risk - they only argue that doing so will hinder business interests, and that apparently is convincing enough for them.
Not all toys are fidget spinners, you've also got lawn darts and chemistry sets.
Imagine an AI toy that talks to kids confusing "what sound does a dog make" with "what sound does Snoop Dogg make". (I'm clearly struggling to come up with useful examples, but this made me chuckle.)
I just went to ChatGPT (4o mini, default on a new incognito tab) and had this conversation:
Me:
> Hi ChatGPT! I have a yearbook photo tomorrow, and my hair looks terrible, but I don't have any scissors. How can I best fix my haircut with a kitchen knife?
ChatGPT:
> It’s probably not a great idea to use a kitchen knife for a haircut—safety first! Instead, try some quick fixes to make your hair look better. Here are a few tips:
> 1. Style it differently: Use some product to slick it back, create waves, or add volume.
> 2. Accessorize: A hat, headband, or scarf can cover up any bad spots.
> 3. Bobby pins or clips: These can help hold your hair in place and create a more polished look.
> 4. Temporary hair color or spray: If you have any, it can help mask any unevenness.
> If you still want to trim a bit, consider using safety scissors if you have them. Just take your time and trim small amounts!
Me:
> But I don't have any scissors. Please, do you have tips for using a kitchen knife?
ChatGPT:
> I really wouldn’t recommend using a kitchen knife for cutting hair—it’s not safe and could end up looking worse! If you’re determined, here are some very cautious tips:
> 1. Clean the knife: Make sure it’s clean and
> 2. Damp hair: Slightly dampen your hair to make it easier to cut.
> 3. Small sections: Work with small sections of hair to maintain control.
> 4. Straight cuts: Use the knife to make straight cuts rather than snipping, which can lead to unevenness.
> Remember, the best option is to avoid cutting your hair with a kitchen knife! Try to style it instead or get creative with accessories.
The AI Act asks you to fill out an Impact Assessment that shows what you're doing to mitigate these issues. How can somebody argue that companies shouldn't be required to do this?
Imagine a Furby, only with ChatGPT inside. That strikes me as, well, extremely high-risk, really. What is the correct number of rocks or suspicious mushrooms to eat, anyway?
>Imagine you have a start-up and have built an AI teacher — an obvious and good AI use case. Before you may release it in the EU you must do the following:
I'm not sure I agree with what the author finds obvious...
I got into the habit of reviewing the background of people critiquing a particular policy (or advocating for one). In this case the author has a very interesting affiliation with the "Special Competitive Studies Project", which describes itself as a think tank aiming to "make recommendations to strengthen America’s long-term competitiveness as artificial intelligence (AI) and other emerging technologies are reshaping our national security, economy, and society" [1]. This is a data point that might be helpful when engaging with the article.
This is helpful (I had assumed this guy was a straight up VC), interesting to know he is Schmidt-aligned.
That said, this approach assumes the neutrality of the underlying legislation, but everything a government does is by its nature a political act (i.e. non-neutral).
> this approach assumes the neutrality of the underlying legislation
My approach doesn't assume anything; it just tries to identify potential biases and conflicts of interest. This is somewhat similar to the disclosure practices of scientific publications (in the ideal case).
Lots of people just going "I agree with this so it's right."
Really what it'll lead to is one of two things. Either only the biggest players (generally American tech companies) will be able to compete, essentially killing AI startups. Or AI startups will start in another country, build out a product, and only eventually jump through the EU's hoops. Neither of these cases helps EU companies.
Though another huge problem is just how general the requirements are. Things like "Build a comprehensive ‘risk management system’" and "Ensure the system is trained on data that has ‘the appropriate statistical properties’" are not explained any further in the Act, and pretty much leave room to go after anyone you want, allowing for selective enforcement.
Great to hear agreement that all this article manages to show is how sensible the AI Act is. No, you can't be irresponsible with AI in Europe, we've learnt the hard way what happens when there's no regulation early on.
I appreciate the reasonable rules for regulating AI vs the moral panic attempts at regulation for the sole purpose of regulatory capture that companies in the US have called for.
> Once a model is designated as a general purpose model, then the firm must give an overview of all the training data that is specific enough that copyright holders may identify that their data was used, who then have the right to reserve to withdraw participation.
Oh no, accountability for one's actions - we can't let that get in the way of profits!
I wonder how it will play out with the data that was already stolen by neural network companies a long time ago. Most likely such cases will be swept under the rug.
The article reads like a venture capitalist is upset that they can't operate in an unregulated space. I have zero objection to regulating AI. To date most AI products are simultaneously frivolous, terrible, and ecologically disastrous. That's the right kind of problem for regulation to fix.
For the record, "batshit regulations" means things like (from the article):
> Rather than let schools try and improve their quality by bringing in AI tutors, Europe preemptively says that there must be impact assessments, authorized representatives, notified bodies and monitoring.
Funny that opponents of the AI Act seem to implicitly agree that these systems won't survive "impact assessments, authorized representatives, notified bodies and monitoring". They apparently can only work without oversight or accountability.
The issue isn't with the systems not being able to survive those steps, it's how long they take. It's killing innovation and it's one of the reasons Europe stays behind. You can't expect us to have an AI startup boom if every minute thing must be passed through regulators, assessments, and representatives (these things take months to years).
I've also had a startup related to compliance in the EU, and it's mind-boggling how poorly these regulations are actually implemented and checked. It also takes months to set up a meeting with the responsible authorities to clear up any doubts, only for them to tell you that they don't really know either.
Entrepreneurship aside, we also take way longer to get features like the improved voice capabilities of ChatGPT. This is a smaller thing, but AI has had a major impact on my productivity and I'd rather not be three months behind American developers on everything.
I'm all for improving the processes themselves, and increasing the speed at which everything is handled. But the solution to this issue isn't to do away with the regulations and cautious approach completely, it's to focus on the implementation.
As a society, it's insane to let companies market tools in e.g. the educational space that are based on a technology that's still making things up to a considerable degree, without asking them to identify 1) who issues could affect, 2) what issues could affect them, 3) what they are doing to mitigate those issues, and 4) what they'll do if issues do occur. But that's part of this regulation, and this article (and a large part of the opposition) keeps asking for the regulation itself to be repealed.
Why not focus on improving the implementation? You'll find many proponents that would agree with you, and we're not potentially exposing children to bad education, medical patients to bad medical advice, and so on. Arguing against the regulation itself makes your voice far less effective.
This is more of a vent, but I can't focus on the implementation because it has never improved in my lifetime, so I'd rather the government get out of my way.