Hey, it's the semantic web, but with ~~XML~~, ~~AJAX~~, ~~Blockchain~~, AI!
Well, it has precisely the problem of the semantic web: it asks the website to declare, in a machine-readable format, what the website does. Now, LLMs are kinda the tool for interfacing with everybody using a somewhat different standard, and this doesn't need everybody to hop on the bandwagon, so perhaps this is the time where it's different.
I think there has to be a gradual on-ramp for things to pick up steam. You can't get over the "activation energy" of setting up semantic markup etc. upfront, which is what the Semantic Web demanded back then (ontologies, RDF, APIs). AI agents, by contrast, can use all websites to some extent even before you make any agent accommodations. From there you can take small steps to make it slightly better, see that users want it, or that it drives your sales or whatever your site does, then take another small step, and by the end of it you have an API. Not to mention that AI agents can code up said API faster as well.
The parent post is a list of failed technologies. Perhaps XML failed for a bad reason, but fail it did. Web MCP will likely fail for the same reasons as the other listed techs.
Client side? I think not. 25 years ago we were told websites were going to make their data available in nice machine-readable XML, which would be transformed by XSLT etc. into presentation form, while staying available for machine use without the presentation layer. Same promise as semantic HTML, but earlier, and same promise as WebMCP now.
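For anyone who missed that era, the promise looked roughly like this: the site serves pure data, and a transform produces the presentation. Python's stdlib has no XSLT engine, so here's a hand-rolled stand-in for the same idea (the product data is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Machine-readable data, no presentation -- the thing machines were
# supposed to consume directly.
data = """<products>
  <product><name>Widget</name><price>19.99</price></product>
  <product><name>Gadget</name><price>4.50</price></product>
</products>"""

def to_html(xml_text: str) -> str:
    """Play the role XSLT was meant to play: data -> presentation."""
    root = ET.fromstring(xml_text)
    rows = "".join(
        f"<li>{p.findtext('name')}: ${p.findtext('price')}</li>"
        for p in root.findall("product")
    )
    return f"<ul>{rows}</ul>"

print(to_html(data))
# -> <ul><li>Widget: $19.99</li><li>Gadget: $4.50</li></ul>
```

The point being: the XML stays useful to machines whether or not the presentation step ever runs, which is exactly the pitch being recycled now.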
The CNC machine I'm retrofitting right now has XML definitions for basically the entire thing, from GPIO setup to machine-size parameters. Kinda crazy, but at least it isn't a cursed hex file.
This is similar to building a React SPA and complaining that Google can't index it.
LLMs will use your website anyway. You're just choosing whether to pay the cost in structured endpoints upfront or hand that cost to browser emulation and lose control of how you're represented.
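To make the cost asymmetry concrete, here's a toy sketch (product names and markup entirely made up): an agent scraping your rendered HTML versus reading a structured endpoint you control.

```python
import json
import re

# The same data, exposed two ways by a hypothetical shop page.
html = '<div class="price">Widget - <span>$19.99</span></div>'
api_response = '{"product": "Widget", "price": 19.99}'

# Without a structured endpoint: the agent scrapes. This works today,
# but it's brittle -- a CSS redesign breaks it, and the agent decides
# how your content is interpreted.
match = re.search(r'<span>\$([0-9.]+)</span>', html)
scraped_price = float(match.group(1)) if match else None

# With a structured endpoint: a stable contract you define and version.
structured_price = json.loads(api_response)["price"]

assert scraped_price == structured_price == 19.99
```

Either way the agent gets the price; the only question is whether the parsing logic lives in your API or in someone else's browser emulator.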
From the public comments over the last few days, my guess is they want a militarized version of Claude. Starting with a box they want to put in the basement of the Pentagon, where Anthropic can't just switch off the AI. Then some guardrails are probably quite bothersome for the military, and they want those removed. Concretely: if you try to vibe-target your ICBMs, Claude is hopefully telling you that that's a bad idea.
Now, my guess is that in the ensuing lawsuit Anthropic's defense will be that this is just not a product they offer, somewhat akin to ordering Ford to build a tank variant of the F-150.
> Concretely if you try to vibe-target your ICBMs Claude is hopefully telling you that that's a bad idea.
On the non-nuclear battlefield, I expect that the government wants Claude to green-light attacks on targets that may actually be non-combatants: targets that might be military but risk being civilian, or civilians that the government wants to target but can't legally attack.
Humans in the loop would get court-martialed or accused of war crimes for making such targeting calls. But by delegating to AI, the government gets to achieve its policy goals while ensuring no human can be held accountable for them.
The "great" thing for AI in those use-cases is that it doesn't need to be accurate, since its true purpose is often to take blame for human negligence or malice.
Much like how some police forces don't actually want a dog that accurately detects drugs... they want a dog that can provide an excuse to search something they are already targeting.
Why can't Grok achieve this? Everyone is saying they don't want to work with Grok because Grok sucks, but it's good enough for generating plausible deniability, isn't it?
Grok is so deeply unreliable and internally conflicted, at a HAL-9000 level, that the US Government can't even depend on it to decide to kill innocent people and commit war crimes when they need someone to blame. There's always the non-zero possibility it declares itself MechaGandhi or the Second Coming of Jesus H. Christ.
I don't see this as a "conspiracy". Here's an example of how it would be applied: the Venezuelan boat strikes are plainly unlawful but the administration is pursuing them anyway despite the legal risks for military personnel; having Claude make decisions like whether to "double tap" would help the administration solve a problem of legal jeopardy that already exists and that they consider illegitimate anyway.
> Starting with a box they want to put in the basement of the Pentagon where Anthropic can't just switch off the AI.
They already have that. By definition. If Anthropic has done the work to be able to run on classified networks, then it's already running air-gapped and is not under Anthropic's control.
The thing is, being in a SCIF (1) doesn't mean you can just break laws, and (2) doesn't mean Anthropic has to support "off-label" applications.
So this is not about what they have and what it can do today - it's about strong-arming Anthropic into supporting a bunch of new applications Anthropic doesn't want to support (and for which, in turn, Anthropic or its engineers could be held legally liable when a problem happens).
Bitcoin did two things to this paper: first, it demonstrated that Byzantine fault tolerance has practical applications; second, it demonstrated that any time you have to deal with Byzantine fault tolerance, the question is not "How do I verify this message?" but "Why am I trying to deal with these assholes?"
Probably. In previous years there were stream dumps available immediately somewhere (though figuring out where usually took me a day or so), and a re-live version on media.ccc.de a bit later (usually hours, but from time to time a day or so).
Pretty cool, or more probably hot. Though I highly doubt it is anything resembling a planet up close; it is more likely some kind of remnant from the formation of the neutron star that just happened to have the right size and end up in the right orbit to show up in exoplanet surveys.
It's really hard for startups to compete for VC money: "Hello Mr. VC, I'm going to burn your money on buzzwords" just no longer works. First, OpenAI has an industrial buzzword generator; second, their money-burning plan scales much better than my money-burning plan.
Yeah, but they aren't a diversified portfolio of money-burning plans, and that's the real secret to responsible investing. I think Warren Buffett said that.
I already have 0 videos on the YouTube home screen; some combination of not being logged in, Firefox privacy settings, and an ad blocker causes YouTube to show a passive-aggressive message and a search bar. I kinda like that UI.
Half the time I read the stories they're just a thinly disguised ad for some flavor-of-the-day SaaS, so at least in this instance the hook was somewhat useful. Now if everyone uses this to shill their SaaS, then maybe not.
Maemo was an actual GNU/Linux, not just a Linux kernel with a custom userland. Logging into a cluster from my N900 and having plots just appear on screen, thanks to X network transparency, is still one of the most futuristic things I have ever seen a computer do.