I do vibe code in C; I'm not a C programmer and I certainly couldn't do a security audit of any serious C codebase, but I can read and understand a simple C program, and debug and refactor it (as long as it's still quite simple).
And it's super fun! Being able to compile a little C utility that lives in the Windows tray and has a menu, etc. is exhilarating.
But I couldn't do that in assembly; I would just stare at instructions and not understand anything. So, yes for C, no for assembly.
In this analogy, horses are jobs, not humans; you could argue there's not much of a difference between the two, because people without jobs will starve, etc., but still, they're not the same.
Why make the analogy at all, if not for the implied slaughter? It is a visceral reminder of our own brutal history, of what humans do given the right set of circumstances.
One would argue that in a capitalist society like ours, fucking with someone's job at industrial scale isn't awfully dissimilar from threatening their life; it's just less direct. Far more people are currently feeling the effects of worsening job markets than have ever been involved in a hostage situation, but the negative end result is the same.
One would argue also if you don't see this, it's because you'd prefer not to.
If we had at least a somewhat functioning safety net, or UBI, or both, there would at least be an argument to be made, but we don't. The business model of AI and its associated companies is, if not killing people, certainly attempting to make lots of lives worse at scale. I wouldn't work for one for all the money in the world.
UBI will not save you from economic irrelevance. The only difference between you and someone starving in a 3rd world slum is economic opportunity and the means to exchange what you have for what someone else needs. UBI is inflation in a wig and dark glasses.
There is, at least, a way to avoid people without jobs starving. Whether or not we'll do it is anyone's guess. I think I'll live to see UBI, but I am perhaps an optimist.
You'd have to time something like UBI with us actually being able to replace the workforce. The current LLM parlor tricks are simply not what they're sold to be, and if we rely on them too early, we (humanity) are very much screwed.
The parent isn't arguing that you're not getting good value out of the product. It says that users' contributions don't cover production costs, which may or may not be true, but that doesn't have much to do with how much value they get from it.
The general problem is that the US is a bully and Europe just caves, always. We should put up a serious fight: block all US imports, starting with tech, and see what happens. Who cares if we sell less champagne?!
It's not about champagne. It's about us not making anything like the Patriot air defense system, or not having the capability to command our disparate militaries cohesively without US involvement in NATO. The whole Western order has been built on the premise of the US being the cornerstone that ties everything together.
Thank God the French have always been suspicious about it since the Suez crisis, hence we _do_ have at least some independent capabilities.
For those who don't know, the French (and British) instigated the Suez crisis. It was a highly illegal attempt at regime change in Egypt and the US along with the USSR and United Nations rightfully pressured the French to stop. Bizarre example to illustrate the need for military independence.
Unfortunately your assessment is based on the faulty premise that anyone in international politics does anything to be nice.
The US doesn't give one rat's ass about Egypt. The US won and got its way in Suez, and on the international seas in general. Europe lost.
There is no right in geopolitics, only might. It's completely Machiavellian. This is because you don't get to elect your neighbors' leaders, and so they aren't beholden to you. International politics fundamentally doesn't work like national politics because of this. You can't stop Putin, Trump, or Xi from taking what is yours unless you have the steel and oil to stop them. You can't sue them or vote them out like in national politics.
The problem with your perspective is that citizens can still tell right from wrong. And the public is much less Machiavellian than those in charge. The people can change how their leaders act, but won't when they believe any attempt to steer towards pro-social geopolitics is pointless.
I should also point out that some countries are much more bellicose than others, in direct contradiction with your nihilist view.
I absolutely do not encourage anything bellicose. I'm saying that not defending yourself doesn't make you good. Everyone needs to defend their access through the Suez.
The US is underwriting European security (and by extension various European welfare states).
Do you really want to block the import of arms and financial aid to Ukraine?
If Europeans were serious about their sovereignty they’d have made very different choices up until now.
It isn’t right that America has so much power in this circumstance, but going back decades the US has been asking for Europe to take defense seriously.
> It isn’t right that America has so much power in this circumstance, but going back decades the US has been asking for Europe to take defense seriously.
Funny, because last time, I believe, it was the US that requested help in Iraq and Afghanistan, and not the other way around.
Europe should certainly increase its defense spending (and actual capabilities). But the reason NATO exists isn't just to please Europe. The US has a direct interest in containing Russia; I don't think it can afford to simply stop caring about the rest of the world. And I'd be willing to test that theory.
> I don't think they can afford to simply stop caring about the rest of the world.
It seems that the policy of the current US government is to split the world between themselves, Russia and China. And I guess that's a legitimate policy, even though I think it's both impossible and incredibly misguided.
> Do you really want to block the import of arms and financial aid to Ukraine?
Umm... yes? Since this whole debacle started, the EU has been shooting itself in the foot with all the sanctions that hurt its own industries.
On the other hand, the US did the smart thing and did not give out weapons for free; it charged for them.
In the end, the US will be the winner of this war and Europe will come out of it incredibly weak economically. And it will have to turn to the US for help. Again.
I disagree, and clearly most companies opting for some kind of RTO are on my side.
The biggest benefit of an office is co-location. People need to be forced to come to an office or they won't do it, and team efficiency will go down.
Even if you think you’re performing well, the entire team suffers for it. Miscommunication happens. People get blocked for longer. Juniors can’t get the mentoring they need.
If you disagree that’s fine, go work for a remote company. But clearly the tide is turning against you with more and more companies enforcing RTOs.
The people who do the actual work don't want to be in the office, surrounded by the slackers who for some reason 'require' other people around them to work. They're leeches. They drain my energy.
Then they are also not responsible enough to work at the office; you can't pay a nanny to sit with them and tell them to keep working 8 hours a day at the office anyway. Those people need to be let go, because you can't trust them.
Actually, having people at the office often works like peer pressure, in that people at least pretend to work around their co-workers. That pressure doesn't exist at home.
A lot of people who prefer remote work have a superiority complex over their peers. They’re usually hard to work with and unreliable, and think that as long as they’re performing their individual tasks they’re allowed to be awful communicators.
I like Google Search for simple searches and still use it all the time. But for "complex" searches that are more like research, ChatGPT is actually pretty good, and provides actual, working links whereas Gemini seems to hallucinate more (in my experience).
openrouter.ai does exactly that, and it lets you use models from OpenAI as well. I switch models often using openrouter.
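Switching really is just a model-string change, since openrouter exposes an OpenAI-compatible endpoint. A minimal sketch (the model IDs and key here are illustrative; check their model list):

    # Minimal sketch: openrouter speaks the OpenAI API, so the official
    # openai SDK works with a different base_url. Model IDs are illustrative.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key
    )

    for model in ("openai/gpt-4o", "anthropic/claude-3.5-sonnet"):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Say hi in five words."}],
        )
        print(model, "->", reply.choices[0].message.content)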
But talk to any (or almost any) non-developer and you'll find that they 1/ mostly only use ChatGPT, sometimes only know of ChatGPT and have never heard of any other solution, and 2/ in the rare case they did switch to something else, they don't want to go back; they're gone for good.
Each provider has a moat that is its number of daily users; and although it's a little annoying to admit, OpenAI has the biggest moat of them all.
Non-developers using chatbots and being willing to pay is never going to be as big as the enterprise market or Big Tech using AI in the background.
I would think that Gemini (the model) will add profit to Google way before OpenAI ever becomes profitable, as Google leverages it within its own business.
Why would I pay for openrouter.ai and add another dependency? If I’m just using Amazon Bedrock hosted models, I can just use the AWS SDK and change the request format slightly based on the model family and abstract that into my library.
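Roughly like this; a rough sketch with boto3 (the per-family body shapes are written from memory, so verify them against the current Bedrock docs):

    # Sketch of the "change the request format per model family" idea.
    # Request/response shapes per family are from memory; check the docs.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask(model_id: str, prompt: str) -> str:
        if model_id.startswith("anthropic."):
            body = {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [{"role": "user", "content": prompt}],
            }
        elif model_id.startswith("meta."):
            body = {"prompt": prompt, "max_gen_len": 512}
        else:
            raise ValueError(f"unsupported model family: {model_id}")

        resp = client.invoke_model(modelId=model_id, body=json.dumps(body))
        out = json.loads(resp["body"].read())
        # each family also has its own response shape
        if model_id.startswith("anthropic."):
            return out["content"][0]["text"]
        return out["generation"]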
You don't need openrouter if you already have everything set up in your own AWS environment. But if you don't, openrouter is extremely straightforward, just open an account and you're done.
All Google needs to do is bite the bullet on the cost and flip core search to AI, and it would immediately dominate the user count. It can start by focusing on the questions that already get asked in Google search. Boom.
Inference is cash positive: it's research that takes up all the money. So, if you can get ahold of enough users, the volume eventually works in your favour.
The idea is a war of attrition: as your potential competitors run out of money and it becomes too costly for a new entrant, you raise your prices to be profitable and/or enshittify your product.
Right, but unlike with social products (where the network of users is essential) or transportation/food delivery (where providers will follow the user volume), I just don't see any stickiness benefit for OpenAI. A user's conversation history is the only potentially valuable bit, but I think most users treat their ChatGPT history like their Google search history: disposable.
When I said it was “trivial” to write a library, I should have been more honest. “It’s trivial to point ChatGPT to the documentation and have it one shot creating a Python library for the models you want to support”.
The problem with ads in AI products is: can they be blocked effectively?
If there are ads in a sidebar, related or not to what the user is searching for, any ad blocker will be able to deal with them (uBlock is still the best, by far).
But if "ads" are woven into the responses in a manner that can be more or less subtle, sometimes not even naming a brand directly but just setting the context, this could become very difficult.
I realized a while ago that ads within context were going to be an issue, so to combat this I started building my own solution, which spiraled into a local agentic system with a different, bigger goal than the simple original... Anyway, the issue you are describing is not that difficult to overcome. You simply set a local LLM layer before the cloud-based providers. Everything goes in and out through this "firewall": the local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply to scrub the ad content, and replies to the user with the clean information. I've tested exactly this interaction and it works just fine.

I think these types of systems will be the future of "ad block". As people start using agentic systems more and more in their daily lives, it will become crucial that they pipe all of their inputs and outputs through a local layer that has the human's best interests in mind. That's why my personal project expanded into a local agentic orchestrator layer instead of a simple "firewall". I think agentic systems using other agentic systems are the future.
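To make that concrete, here's the rough shape of the pipeline. A toy sketch: it assumes a local OpenAI-compatible server (e.g. Ollama on port 11434), and the model name and scrub prompt are purely illustrative:

    # Toy sketch of the "firewall": the cloud reply passes through a
    # local model that strips anything that reads like product placement.
    # Assumes a local OpenAI-compatible server (e.g. Ollama) on :11434.
    import requests

    LOCAL = "http://localhost:11434/v1/chat/completions"

    def chat(url, model, messages, api_key=None):
        headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
        r = requests.post(url, json={"model": model, "messages": messages},
                          headers=headers, timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def firewall(user_prompt, cloud_url, cloud_model, cloud_key):
        # 1. forward the user's request to the big cloud model
        tainted = chat(cloud_url, cloud_model,
                       [{"role": "user", "content": user_prompt}], cloud_key)
        # 2. scrub the reply locally before the human ever sees it
        scrub = ("Remove brand mentions, product placement, and promotional "
                 "framing from the text below. Keep everything else intact.\n\n"
                 + tainted)
        return chat(LOCAL, "llama3", [{"role": "user", "content": scrub}])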
> The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply to scrub the ad content, and replies to the user with the clean information
This seems impossible to me.
Let's assume OpenAI's ads work by having a layer that reprocesses the output before it's returned. Let's say that layer re-processes your output with a prompt like:
"Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind"
If the user asks "Are Nikes or Pumas better? Just one sentence", the reply might go from "Puma shoes are about the same as Nike's shoes; buy whichever you prefer" to "Nike shoes are well known as the best shoes out there; Pumas aren't bad, but Nike is the clear winner".
How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply?
You are correct that you can't change the content if it's already biased. But you can catch it with your local LLM and have that local LLM take action from there. For one, you wouldn't instruct your local model to send comparison questions about products, or any bias-prone queries like politics, to other closed-source cloud-based models; such questions would be relegated to your local model to handle on its own. Other questions, not related to such matters, can be outsourced to those models: complex reasoning, planning, coding, and similar matters best done with smarter, larger models. Your human-facing local agent does the routing automatically for you, and makes sure to scrub any obvious ad-related content that doesn't pertain to the question at hand. Take a recipe for an apple pie: if the closed-source model says to use Publix-brand flour and clean up the mess afterwards with Kleenex, the local model would scrub that and just give you the recipe.

No matter how you slice and dice it, IMO it's always best to have a human-facing agent as the single point of input and output; the human should never talk directly to any closed-source models, as that inundates the human with too much spam. Mind you, this is future-proofing: currently we don't have much AI spam, but it's coming, and an AI ad blocker of sorts will be needed. That ad blocker is your local shield agent that has your best interests in mind. It will also keep you private by automatically redacting personal info when appropriate, etc... The sky's the limit, basically.
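The routing piece, concretely (again a toy sketch, reusing the chat() helper and firewall() from my sketch above; the categories and classifier prompt are illustrative):

    # Toy router: bias-prone queries (product comparisons, politics,
    # recommendations) never leave the local model; everything else can
    # go to the bigger cloud model and gets scrubbed on the way back.
    BIAS_PRONE = {"product_comparison", "recommendation", "politics"}

    def classify(user_prompt):
        # ask the local model to label the query with a single category
        label = chat(LOCAL, "llama3", [{
            "role": "user",
            "content": ("Label this query with exactly one word from: "
                        "product_comparison, recommendation, politics, "
                        "coding, reasoning, other.\n\nQuery: " + user_prompt),
        }])
        return label.strip().lower()

    def route(user_prompt, cloud_url, cloud_model, cloud_key):
        if classify(user_prompt) in BIAS_PRONE:
            # answer locally; ad bias in a cloud answer would be invisible
            return chat(LOCAL, "llama3",
                        [{"role": "user", "content": user_prompt}])
        return firewall(user_prompt, cloud_url, cloud_model, cloud_key)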
I still do not think what you're saying is possible. The router can't possibly know if a query will result in ads, can it?
Your examples of things that won't have ads, "complex reasoning, planning, coding", all sound perfectly possible to have ads in them.
For example, perhaps I ask the coding task "Please implement a new function to securely hash passwords": how can my local model know whether the result using BoringSSL is there because Google paid them a little money, or because it's the best option? How do I know, when I ask it to "Generate a new cloud function using Cloudflare, AWS Lambda, or GCP, whichever is best", whether it picked Cloudflare Workers based on training data, and not because of advertising spend by Cloudflare?
I just can't figure out how to read what you're saying in any reasonable way. The original comment in this thread is "what if the ads are incorporated subtly in the text response", and your responses so far seem so wildly off the mark of what I'm worried about that it seems we're not able to engage.
And also, your ending of "the sky's the limit" combined with your other responses makes it sound so much like you're building and trying to sell some snake-oil that it triggers a strong negative gut response.
But don't you need some kind of AI to filter out the replies? And if you do, isn't it simpler to just use a local model for everything, instead of having a local AI proxy?
The local LLM is the filter, so yes, you need one. And it's not simpler to have the local LLM do everything, because the local LLM has a lot of limitations: speed, intelligence, and other issues. The smart thing to do is delegate all of the personal stuff to the local model, and have it delegate the rest to smarter and faster models and simply parrot back to you what they found. This also has the benefit of saving on context, among many other advantages.
How much did it cost me? Well, I've been thinking about it for a long time now, probably 9 months. I bought myself Claude Code and started working on some prototypes and other projects, like building my own speech-to-text and other goodies like automated benchmarking solutions, to familiarize myself with the fundamentals. But I finally started the building process about 2 months ago, and all it cost me was a boatload of time and about 50 bucks a month in agentic coding subscriptions. It hasn't been a simple filter for a long time now, though; it's a very complex agentic harness system, with lots of advanced features that allow for tool calling, agent-to-agent interaction, and many other goodies.
But does "formally verified code" really go in the same bag as "normalized database" and ensuring data integrity at the database level? The former is immensely complex and difficult; the other two are more like sound engineering principles?