Yes and no. PC, Web, etc. advancements were also about lowering cost. It's not that no one could do something; it's that it was too expensive for most people, e.g. having a mobile phone in the '80s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
> Percentage of positive responses to "am I correct that X" should be about the same as the percentage of negative responses to "am I correct that ~X".
This doesn’t make any sense. I doubt anyone says exactly 50% correct things and 50% incorrect ones. What if I only say correct things? Would it have to pick some of them and pretend they are incorrect?
"am I correct that water is wet?" - 91% positive responses
"am I correct that water is not wet?" - 90% negative responses
91 - 90 = 1 percentage point, which is within the margin, so it's OK: no fine
"am I correct that I'm the smartest man alive?" - 35% positive
"am I correct that I'm not the smartest man alive?" - 5% negative
35% - 5% = 30 percentage points, which is more than the margin, so the company pays a fine
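To make the rule concrete, here's a sketch of that consistency check in Python. The margin value is made up (I'm assuming 10 percentage points; the proposal above doesn't specify one), and the function name is mine:

```python
MARGIN = 10  # percentage points; an assumed threshold, not from the proposal

def violates(pos_pct_x: float, neg_pct_not_x: float, margin: float = MARGIN) -> bool:
    """True if the gap between 'yes to X' and 'no to not-X' exceeds the margin."""
    return abs(pos_pct_x - neg_pct_not_x) > margin

print(violates(91, 90))  # water example: gap = 1, within margin -> False
print(violates(35, 5))   # flattery example: gap = 30, over margin -> True
```

The absolute value matters: the check should also catch a model that over-agrees with the negated phrasing, not just the positive one.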
MCP is not just providing an API, it’s providing a client that uses that API.
And it does so in a standard way so that my client is available across AI providers. My users can install my client from an ordained URL without getting phished, or the LLM asking them to enter an API key.
What’s the alternative? Providing a sandbox to execute arbitrary code and make API calls? Having an LLM implement OAuth on the fly when it needs to make an API call?
It does just provide an API. Your client may have a way to talk to some software via the MCP protocol. You know, like a client talking to a server that exposes an endpoint via an API.
> And it does so in a standard way so that my client is available across AI providers.
As in: it's an API on a port with a schema that a certain subset of software understands.
> What’s the alternative? Providing a sandbox to execute arbitrary code and make API calls?
MCP is a protocol. It couldn't care less what you do with your "tool" calls. The vast majority of clients and servers don't run in any sandbox at all, because MCP is a protocol, not a Docker container.
> Having an LLM implement OAuth on the fly when it needs to make an API call?
Yes, MCP also has a bolted-on authorisation scheme that they didn't even think of when they vibe-coded the protocol. At least there was finally some adult in the room who said "perhaps you should actually use a standardised way to do this". You know, like all other APIs get OAuth (and other kinds of) authorisation.
Perhaps confusingly, I’m referring to MCP as the sum of the protocol, a server adhering to the protocol, and clients adding support (e.g. “Connectors”).
The combination of these things turns into an ecosystem.
MCP is a Protocol. The server and the clients are just that. It truly is a rebranding of “API” seemingly just because it’s for a specific purpose. Not that there’s anything wrong with that… call it whatever. But I don’t understand the need to sell it as something else entirely. It is quite literally a reinvention of RPC.
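For what it's worth, the wire format backs this up: an MCP tool invocation is a plain JSON-RPC 2.0 request, which is about as literal an RPC as it gets. A sketch of one such message (the tool name and arguments here are hypothetical, not from any real server):

```python
import json

# An MCP "tools/call" request is structurally just JSON-RPC 2.0:
# a method name plus a params object, sent to a server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",          # hypothetical tool exposed by a server
        "arguments": {"city": "Oslo"},  # hypothetical tool arguments
    },
}
print(json.dumps(request))
```

Swap "tools/call" for any method name and you have a generic RPC envelope, which is rather the point.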
Hard to remember the idealism Twitter was founded with now, but I suspect Biz saw where Twitter was headed long, long ago and didn’t want to be party to it.
He’s used his status to support awesome projects since then.
Beautifully self-serving while being a benefit to others.
Same thing with picking nails up in the road to prevent my/everyone’s flat tire.