...and (again), hello world does not work [1]. the ai slop pr [2] absolutely butchers the fix. anyone foolish enough to switch to this is in for a rough time. details matter!
Exactly. As it stands, people in the thread with huge dbs get a poor UX when they really don't need to. The same goes for people who have hit corruption issues on network storage due to the default saving method (I personally have never experienced this).
the tests validate the plugin by executing actual builds in an isolated/temporary gradle project. debugging doesn't work out of the box because the build runs in another process
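for what it's worth, a rough sketch of the pattern (assuming Gradle TestKit is on the test classpath; the plugin id `com.example.myplugin` is made up): `GradleRunner` spins up the build in a throwaway project dir, and `withDebug(true)` runs it in the same JVM so breakpoints actually hit.

```kotlin
import org.gradle.testkit.runner.GradleRunner
import java.nio.file.Files

fun main() {
    // throwaway project that applies the plugin under test
    val projectDir = Files.createTempDirectory("plugin-test").toFile()
    projectDir.resolve("settings.gradle.kts").writeText("""rootProject.name = "plugin-test"""")
    projectDir.resolve("build.gradle.kts").writeText("""plugins { id("com.example.myplugin") }""")

    val result = GradleRunner.create()
        .withProjectDir(projectDir)
        .withPluginClasspath()      // injects the plugin-under-test's classpath
        .withDebug(true)            // run the build in-process so the debugger attaches
        .withArguments("tasks")
        .build()

    println(result.output)
}
```

without `withDebug(true)` (or `-Dorg.gradle.debug=true` against a forked daemon), the build executes in a separate process and your IDE's debugger never sees it.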
there's currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). certainly a mix of duplicates / feature requests, but 'buggy mess' seems appropriate
maybe we don't have AGI to prevent all bugs. but surely some of these could have been caught with some good old-fashioned elbow grease and code review.
successful how? the only metric i see is # of pull requests which means nothing. hell, $dayJob has hundreds of PRs generated weekly from renovate, i18n integrations, etc. with no LLM in the mix!
The only reason for source code to exist is for humans to read it, but when source code gets churned out (by AI agents) in too large a quantity for any human to realistically read and analyze, then what's the point of having source code in the first place? Generating binary directly simply makes sense. Working with binary does too, even when a human is involved, as long as there's an AI helper as well. The human can simply ask the AI assistant to explain whatever logic lies behind the binary code, and instruct the AI agent to modify the binary directly if necessary. That may be scary and not easy to accept. Going further with this idea, even written text may become "too costly to work with" once there's an AI agent to verbally or graphically serve the human with whatever aspect of a given text is of interest in a given situation.
LLMs are trained on source code, so that's what they can (barely) write. Decompiling is a *lossy* operation, which means training directly on its output would carry much less information, and it would be a nightmare if anyone (human or LLM) needed to debug.
this is likely an ecosystem sort of thing. if your language gives you the tools to do so at no cost (memory/performance) then folks will naturally utilize those features and it will eventually become idiomatic code. kotlin value classes are exactly this and they are everywhere: https://kotlinlang.org/docs/inline-classes.html
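to make it concrete, a minimal sketch of a Kotlin value class (the `UserId` type here is invented for illustration): the wrapper is inlined by the compiler, so at runtime it costs the same as the raw `String`, but you still get type safety and a place to hang validation.

```kotlin
// The compiler inlines this wrapper: no extra allocation at runtime.
@JvmInline
value class UserId(val raw: String) {
    init {
        require(raw.isNotBlank()) { "UserId must not be blank" }
    }
}

// Accepts only a UserId, never a bare String, at zero runtime cost.
fun lookup(id: UserId): String = "user:${id.raw}"

fun main() {
    val id = UserId("42")
    println(lookup(id))   // prints "user:42"
    // lookup("42")       // would not compile: that's the whole point
}
```

because the wrapper is free, people reach for it everywhere, and it ends up reading as idiomatic rather than ceremonious.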
Haxe has a really elegant solution to this in the form of Abstracts[0][1]. I wonder why this particular feature never became popular in other languages, at least to my knowledge.
the incident has now expanded to include webhooks, git operations, actions, general page load + API requests, issues, and pull requests. they're effectively down hard.
hopefully it's down all day. we need more incidents like this for people to get a glimpse of the future.
[1] https://github.com/cloudflare/vinext/issues/22
[2] https://github.com/cloudflare/vinext/pull/31/changes#r284987...