
> Projects hosted on Vercel benefit from platform-level protections that already block malicious request patterns associated with this issue.

https://vercel.com/changelog/cve-2025-55182

> Cloudflare WAF proactively protects against React vulnerability

https://blog.cloudflare.com/waf-rules-react-vulnerability/


We collaborated with many industry partners to proactively deploy mitigations due to the severity of the issue.

We still strongly recommend that everyone upgrade their Next.js, React, and other React meta-framework (peer) dependencies immediately.


Does this include any provider that does not fall under the US CLOUD Act? This vulnerability disclosure timeline is a nightmare for us Europeans: it was fully disclosed yesterday in the late afternoon for us, and I can already trace attack attempts in my logs from during the night. I expect some fallout from this.

I genuinely believe Next.js is a great framework, but as a European developer working on software that must not touch anything covered by the CLOUD Act, you're effectively telling me that Next.js and React, despite being OSS, are not made for me anymore.


It’s infuriating how US-centric some OSS maintainers can be. Really sad if the OSS ecosystem also has to fragment into pieces the way much of the internet is starting to.

Does AWS WAF have a mitigation in place?

Yes, the AWS WAF rule is in AWSManagedRulesKnownBadInputsRuleSet: https://aws.amazon.com/security/security-bulletins/rss/aws-2...
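
If you manage your ACLs in code, attaching that managed rule group looks roughly like this with boto3 (a minimal sketch; the ACL name, scope, and metric names below are illustrative, not taken from the bulletin):

    import boto3

    # Minimal sketch: attach AWS's Known Bad Inputs managed rule group
    # to a WAFv2 web ACL. ACL name, scope, and metric names are placeholders.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope requires us-east-1

    wafv2.create_web_acl(
        Name="react-rsc-mitigation",        # hypothetical ACL name
        Scope="CLOUDFRONT",                 # use "REGIONAL" for ALB/API Gateway
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "known-bad-inputs",
            "Priority": 0,
            # Managed rule groups take OverrideAction instead of Action.
            "OverrideAction": {"None": {}},
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesKnownBadInputsRuleSet",
                }
            },
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "known-bad-inputs",
            },
        }],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "react-rsc-mitigation",
        },
    )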


I patched and rebuilt what I could and added custom Crowdsec WAF rules for this, in case I missed something.

When I saw "WIZ Research - Critical Vulnerabilities in React and Next.js" on the big image banner, I immediately thought that Wiz found the vulnerability.

When Reuters has an article that says "Reuters Business - Interest rates going up", do you think Reuters made the interest rates go up themselves or that they are reporting on the interest rates?

Reuters isn’t a bank. Wiz is a security company so they have a greater responsibility to distinguish between their own original work and discoveries made by other researchers.

They do that by saying "we discovered this" when they discover it.

Dang, Cloudflare is moving fast. Cloudflare WAF proactively protects against React vulnerability https://blog.cloudflare.com/waf-rules-react-vulnerability/

This is what coordinated disclosure looks like.

Given that most Next.js and RSC apps run on Vercel, I’m wondering if they’re doing the same thing. There’s no information about this in their latest blog post [0].

Update: they do something similar, mentioned here [1].

[0] https://nextjs.org/blog/CVE-2025-66478

[1] https://vercel.com/changelog/cve-2025-55182


Would be interesting to hear from Cloudflare the extent of exploitation before today. I'm assuming they can see if/when this started being exploited.


Comments moved thither. Thanks!


Same issue, this is just a dupe


It's hard not to use Cloudflare at least for me: good products, "free" for small projects, and if Cloudflare is down no one will blame you since the internet is down.


> if Cloudflare is down no one will blame you since the internet is down.

That is true. It is also the problem. It means the biggest providers don't even need to bother being reliable, because everyone will use them anyway.


Well, no. If they are unreliable to the point of being an outlier when compared to the alternatives then people will switch. At this stage they’re not an outlier.


Maybe not, but they are approaching it. I wouldn't use it for anything funded with my own cash, I no longer recommend it as a first choice, but I'm not suggesting it gets replaced yet. It's somewhat in the 'legacy tech' category now in terms of how I perceive it and deal with it.


They are often promoted as being more reliable.



> if Cloudflare is down no one will blame you since the internet is down.

But this is not really the case. When Azure/AWS went down, same as this time with Cloudflare, a significant amount of the web was down, but most of it was not. It just makes it more obvious which provider you use.


I've been migrating all my personal stuff to Cloudflare. They have good products and good pricing.

At the same time I'm worried about how the internet is becoming even more centralized, which goes against how it was originally designed.


Same here. A lot of my sites are now down.


[flagged]


No, just competing priorities.


Self-hosting a free real-time AI app to help people practice speaking English

https://www.fikrikarim.com/bule-ai-initial-release


I've always thought about the best way to contribute to humanity as: the number of people you help x how much you help them. I think what Karpathy is doing is one of the highest-leverage ways to achieve that.

Our current world is built on top of open source projects. This is possible because there are a lot of free resources for learning to code, so anyone from anywhere in the world can learn and make a great piece of software.

I just hope the same will happen with the AI/LLM wave.


This free tradition is, I think, one of the things I love so much about software, but I don't see how it can continue with LLMs, due to the extremely high training costs and the powerful hardware required for inference. It seems like writing software will necessarily require paying rent to the LLM hosts to keep up. I guess it's possible that we'll figure out a way to do local inference that is accessible to everyone, the way most other modern software tools are, but the high training costs make that seem unlikely to me.

I also worry that as we rely on LLMs more and more, we will stop producing the kind of tutorials and other content aimed at beginners that makes it so easy to pick up programming the manual way.


There's a Stephen Boyd quote that goes something like: "if your optimization problem is too computationally expensive, just go on vacation to Greece for a few weeks, and by the time you get back, computers might be fast enough to solve it." With LLMs there's sort of an equivalent situation with cost: how mind-blowing would it have been to be able to train this kind of LLM at all even just 4 years ago? And today you can get a kindergartener level chat model for about $100. Not hard to imagine the same model costing $10 of compute in a few years.

There's also a reasonable way to "leapfrog" the training cost with a pre-trained model. So if you were doing nanochat as a learning exercise and had no money, the idea would be to code it up, run one or two very slow gradient descent iterations on your slow machine to make sure it is working, then download a pre-trained version from someone who could spare the compute.
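
A minimal sketch of that leapfrog workflow in PyTorch (the toy model and checkpoint filename are hypothetical stand-ins, not nanochat's actual API):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-in for a from-scratch GPT; the point is the workflow,
    # not the architecture.
    class TinyLM(nn.Module):
        def __init__(self, vocab=50304, d=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, d)
            self.head = nn.Linear(d, vocab)

        def forward(self, x):
            return self.head(self.emb(x))

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Step 1: a couple of slow gradient steps on random tokens, just to
    # confirm the loss decreases and the plumbing works end to end.
    x = torch.randint(0, 50304, (2, 128))
    for _ in range(2):
        logits = model(x[:, :-1])                  # predict the next token
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               x[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"loss: {loss.item():.3f}")

    # Step 2: "leapfrog" the expensive training run by loading weights
    # someone with spare compute already trained (hypothetical file).
    # model.load_state_dict(torch.load("pretrained.pt", map_location="cpu"))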


But in this case the reason is simple: the core algorithm is O(n^2) in sequence length, and that is not going to be improved in a few weeks.
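
For context, here is where the n^2 comes from in standard scaled dot-product attention (a toy, single-head sketch):

    import torch

    n, d = 4096, 64                    # sequence length, head dimension
    q, k, v = (torch.randn(n, d) for _ in range(3))

    scores = (q @ k.T) / d ** 0.5      # (n, n): every token scores every other token
    out = scores.softmax(dim=-1) @ v   # doubling n quadruples this work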


> today you can get a kindergartener level chat model for about $100. Not hard to imagine the same model costing $10 of compute in a few years.

No, it's extremely hard to imagine, since I used one of Karpathy's own models to build a basic chat bot like six years ago. Yes, it spoke nonsense; so did my GPT-2 fine-tune four years ago, and so does this.

And so does ChatGPT.

Improvement is linear at best. I still think it's actually a log curve, and GPT-3 was the peak of the "fun" part of the curve. The only evidence I've seen otherwise is bullshit benchmarks, "agents" that increase performance 2x by increasing token usage 100x, and excited salesmen proclaiming the imminence of AGI.


Apparently 800 million weekly users are finding ChatGPT useful in its present state.


1. According to whom? OpenAI? 2. Its current state is "basically free and containing no ads". I don't think this will remain true given that, as far as I know, the product is very much not making money.


Yes, that number is according to OpenAI. They released that 800m number at DevDay last week.

The most recent leaked annualized revenue rate was $12bn/year. They're spending a lot more than that but convincing customers to hand over $12bn is still a very strong indicator of demand. https://www.theinformation.com/articles/openai-hits-12-billi...


Part of that comes from Microsoft API deals. Part of that will most certainly come from the vast network of companies buying subscriptions to help "Open" "AI" [1].

Given the rest of the circular deals, I'd also scrutinize whether that applies to the revenue. The entanglement with the Microsoft investments and the fact that "Open" "AI" is a private company make that difficult to research.

[1] In a U.S. startup, I went through three CEOs and three HR apps, which mysteriously had to change for no reason other than to accommodate the new CEO's friends and their startups.


When people take the time to virtue signal with "Open" "AI" or the annoyingly common M$ on here, I often wonder why they are wasting their precious time on earth doing that


It is in my style guide. I see that you optimized your time by omitting the full stop at the end of the sentence.


Totally fine, you're not alone. The kids call it a "typing quirk" as a justification if you're interested in learning more.


Even with linear progression of model capability, the curve for model usefulness could be exponential, especially if we consider model cost, which will come down.

For every little bit a model gets smarter and more accurate, there are exponentially more real-world tasks it can be used for.


This. It looks like one of the keys to maintaining open source is to ensure OSS developers have access to capable models. In the best of worlds, LLM vendors would recognize that open source software is the commons that feeds their models and ensure it flourishes.

In the real world...


Maybe this isn't possible for LLMs yet, but open source versions of AlphaZero have been trained on peer-to-peer networks.

https://zero.sjeng.org/

https://katagotraining.org/


(This is a bit ranty, but it comes from a sincere desire for a better world, and from being the recipient of personal attacks for believing a better world is achievable by a different path than others favor)

I feel like this point of view is an ideal not shared by one of the main branches of anti-AI sentiment.

The idea of intellectual property works against this. Rather than contributing to humanity directly, ownership of information is accumulated by individuals and then rented to humanity.

At the same time I agree that people should be able to have a livelihood that affords them the ability to create new intellectual contributions.

The service Karpathy is providing is also being provided by thousands of YouTube creators across a huge variety of topics. It's a little sad that so many must support their efforts with sponsorships from sources with varying degrees of ethical behaviour. Patreon is better but still not ideal. I sincerely believe this _is_ one of the best ways to contribute to society.

A recent Daily Show had Jon Stewart describe training AI as strip mining human knowledge. Training AI is regularly described as theft, as if this position were a given and no counterargument possible. It is opinion masquerading as fact. This saddens me because it suggests that the war to control the narrative is being won by people who want to entrench a hypercapitalistic vision of ownership, in which not only is a particular expression of an idea ownable, but the owner also stakes a claim to some of any ideas that come from viewing that expression.

I cannot see any way that this viewpoint would aid humanity as a whole; instead it assigns benefits to a collection of individuals. The ability to trade intellectual property means that ownership inevitably gets passed to a smaller and smaller pool of individuals over time.

I think we really do need a new way to consider these issues in light of the modern world. When mentioning these thoughts to others a common refrain is that it doesn't matter because the powers that be (and their lobbyists) will prevent any fix from happening. I have never been fond of that particular fatalism, especially when it inhibits discussion of what would be better.


Awesome approach.

I'm all for abolishing IP if all AIs are owned communally. I.e. ideally they're utilities or flat out co-ops like some Spanish businesses.

https://en.wikipedia.org/wiki/Mondragon_Corporation

Consum (Spanish supermarket).

They don't get to use everything communally and then capitalism their way forward.


I recommend his ANN/LLM-from-scratch videos to people a lot, because not only is he a clear instructor, his code tends to be very Pythonic and strikes just the right balance of terse but readable (not counting the PyTorch vectorization stuff, but that's not his fault; it's just complex). So I think people benefit just from watching and imitating his code style.


Then a single person who's learned those skills decides to poison all of us thanks to the skills acquired.


If only it were so easy.


strong +1 - developers like him are heroes


As noble as the goal sounds, I think it's wrong.

Software is just a tool. Much like a hammer, a knife, or ammonium nitrate, it can be used for both good or bad.

I say this as someone who has spent almost 15 years writing software in my free time and publishing it as open source: building software and allowing anyone to use it does not automatically make other people's lives better.

A lot of my work has been used for bad purposes or what some people would consider bad purposes - cheating on tests, cheating in games, accessing personal information without permission, and in one case my work contributed to someone's doxxing. That's because as soon as you publish it, you lose control over it.

But at least with open source software, every person can use it to the same extent so if the majority of people are good, the result is likely to be more positive than negative.

With what is called AI today, only the largest corporations can afford to train the models which means they are controlled by people who have entirely different incentives from the general working population and many of whom have quite obvious antisocial personality traits.

At least 2 billion people live in dictatorships. AI has the potential to become a tool of mass surveillance and total oppression from which those countries will never recover because just like the models can detect a woman is pregnant before she knows it, it will detect a dissenter long before dissent turns into resistance.

I don't have high hopes for AI to be a force for good and teaching people how toy models work, as fun as it is, is not gonna change it.


"With what is called AI today, only the largest corporations can afford to train the models"

I take it you're very positive about Andrej's new project, then, which allows anyone to train a model for a few hundred dollars that is comparable to the state of the art from just 5 years ago.


For a few hundred dollars, given heavily-VC-subsidized hardware that is probably partially funded by nvidia and various AI companies, etc.

Can I run it on my local hardware (nvidia consumer card, AMD cpu)? No. When could that corporation cut off my access to that hardware if I did anything it didn't like? Anytime.

Lots of things have started off cheap / subsidized to put competitors out of business, and then the prices go up, up and up..


> Can I run it on my local hardware?

Yes. The training process requires big expensive GPUs. The model it produces has 561M parameters, which should run on even a high end mobile phone (I run 4B models on my iPhone).
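
Rough arithmetic on why that fits (weights only, ignoring the KV cache; the 4-bit figure assumes quantization):

    params = 561e6
    print(f"fp16: {params * 2 / 1e9:.2f} GB")    # ~1.12 GB of weights
    print(f"int4: {params * 0.5 / 1e9:.2f} GB")  # ~0.28 GB if 4-bit quantized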


I would genuinely love to think otherwise, but I've grown up seeing good things used in stupid ways (not necessarily out of malice).


> At least 2 billion people live in dictatorships. AI has the potential to become a tool of mass surveillance and total oppression from which those countries will never recover because just like the models can detect a woman is pregnant before she knows it, it will detect a dissenter long before dissent turns into resistance.

It already works like this in your precious western democracies, and they didn't need AI to be authoritarian total-surveillance states in spirit, with quite a lot of support from a propagandized populace that begged for, or pretended to agree with, the infringement of their civil rights because of terrorism, drugs, covid, or protecting the poor, poor children.

You can combat tech with legislation and culture, but here the legislation and culture were way ahead of the tech in being extremely authoritarian in the first place.


I don't know, man. All this "tech" didn't see AOC, Sanders, and other 'radicals' coming. The parties actually had to expend effort after the fact to delegitimize them, and they have to keep doing so for additional candidates who come along (Jamaal Bowman, Cori Bush, etc.).


I'm afraid the technology will do more damage because many people will abuse it for fake news and misinformation.


Yeah, it feels similar to inventing the nuke. Or it's even more insidious, because the harmful effects of the tech are not nearly as obvious or immediate as the good effects, so less restraint is applied. But also, similar to the nuke, once the knowledge of how to do it is out there, someone's going to use it, which obligates everyone else to use it to keep up.


While documenting a build path is nice, IMHO renting hardware nobody can afford from VC-backed cloud providers with cold hard cash, to produce clones of legacy tech on toy datasets under the guise of education, is propping up the AI bubble and primarily helping institutional shareholders in those AI-bubble companies, particularly their hardware supplier NVidia. Personally, I do not see this as helping people or humanity.

This would sit better with me if the repo included a first tier use case for local execution, non-NVidia hardware reference, etc.


"This would sit better with me if the repo included a first tier use case for local execution, non-NVidia hardware reference, etc."

This is a pretty disheartening way to respond to something like this. Someone puts a great deal of effort into giving something interesting away for free, and is told "you should have also done THIS work for free as well in order for me to value your contribution".


It is an objective and transparent response based on free software community norms. Feel free to interpret it differently and to be disheartened. Hell, many of us are disheartened by the AI VC political theater we are seeing right now: experienced programmers, artists, lawyers, perhaps much of humanity. Let's stick to the objective elements of the discussion, not emotional opining.


If you can't afford $100 or learn how to train it locally with more time and less money, then this isn't something you should be focusing on at all.


It is amusing to note the dichotomy between the clearly compassionate, empathetic and altruistic perspective displayed here and the comically overstated framing of helping humanity.


(Shrug) Other sites beckon.


This is wholly unhelpful.


Sorry. Personally, as an HN user, I'd like to see more Karpathy and less... whatever this guy is rambling on about.


Tinkering with something is what inspires the next generation of innovators, in this space or another.

Think back to your first experience with tech, something you just earnestly thought was cool...


I think you got your proportions slightly wrong there. This will contribute about as much to an AI bubble as a kid tinkering with combustion contributes to global warming.


Not really. Anything that guy does sets the tone for an extended cacophony of fans and followers. It would be a sad day if nobody critically assessed the motivations, effects, and framing of those moves. I question the claim that this move helps humanity, and I stand by the assessment that it's just more feeding of an unfree ecosystem, which equates to propping up the bubble.


That certainly sounds very ominous.


He is the GOAT of LLM MVPs. That is educational and useful, especially because he uses a minimal and clean style, but I don't see how it even compares with kernels, operating systems etc.

So I appreciate his work in an academic and educational sense, but large-scale applications built on stolen training material are still theft.


I would adjust your formula to:

number of people you help x how much you help them x number of people you harm x how much you harm them

For example: harming all the content creators of the world a little bit by stealing their work without compensation or permission. How much does that cost globally, year after year? How do we even quantify the long-term consequences of that? Stuff like that.


If you consider the cost of hiring a human professional versus using multimodal AI for something, it's easy to realize literally thousands of dollars of value per chat.

Multiply that by many billions of chats per day.

Lawyers and other professionals charge a lot. So do artists, especially when you want to do a million revisions. LLMs hand it out for free, making many knowledge and art professions affordable and accessible to the masses.

Stable owners were upset when cars replaced horses, but you can't stop progress, especially when the value proposition is undeniable.


I wonder what people will do when they realize that LLM lawyers produce insufficient results, but "suddenly" all the cheap bottom-rung lawyers are gone, having switched professions.

As for the LLM "creative" content, have you seen or read it? Well, same problem. Once you need quality content, good luck finding a cheap creator: pay full price for an experienced one, and likely wait.

PS: I don't doubt that LLMs are here to stay. They will see a lot of usage and pervade all industries. It's just that the future will be pretty shit: talking on the phone with LLMs, reading LLM slop, seeing LLM slop everywhere, receiving generated emails and using LLMs to reverse-parse them in search of the actual content, a major economic downturn, rapidly slowing salary growth (not that it was big before), etc.


I agree with this. I’d prefer to have Meta be the steward for React instead of Vercel because Meta does not have a conflict of interest.


They might not have the conflict of interest, but they also don't have the business interest. Meta is a spyware company that makes all of its money from collecting personal data to sell to advertisers. They have zero incentive to dedicate any kind of significant resources to supporting millions of websites that use their internal UI library.


I think the OP is telling a similar story; they're not necessarily the same person.


You are correct. I tried to rewrite my statement after I was downvoted. English is not my first language, to the surprise of most.

