Hacker News | gip's comments

The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded, and foreign mass surveillance is happening or will happen.

I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention, and inference. That makes the most sense, will minimize conflicts, and puts people in control of their destiny.


AI is a transformative technology that will reshape how companies are run. More layoffs may be coming, unfortunately. But on the other hand, more companies and more products will be created. More competition overall, including for Block.

The overarching risk, imo, is America turning against tech and its leaders / billionaires. I think this is slowly happening. And why not, if the People decide that tech is not bringing good things to our modern society anymore, that should be respected.


Similarly, in the 2000s, the US pushed back against the development of Galileo [0] and preferred that Europe continue relying on GPS. That created tensions between the US and the EU.

Fighting data sovereignty is a losing battle for the US: data are too strategic to outsource, even to allies.

[0] https://en.wikipedia.org/wiki/Galileo_(satellite_navigation)


At this stage tech companies should be pushing for very strong legislation that makes the US a bastion of data privacy to restore trust. But they are still pushing in the other direction.

No amount of legislation can stop subpoenas, wiretapping and other extrajudicial means the US has used for data surveillance since the inception of the Patriot Act. With data privacy increasingly becoming a critical matter of national security, strengthening data sovereignty laws and holding corporations accountable was always the way forward.

This is untrue. Subpoenas, wiretapping, and other extrajudicial means can be stopped by legislation that bans them. You can't say in one breath that the legislation that enables them (the Patriot Act) cannot be undone by more legislation. There are many hurdles to producing the required legislation, which may not even be broadly supported by the public, but it isn't correct to say "no amount of legislation can stop existing legislation".

That would require repealing FISA and the Patriot Act. That won't happen.

More fundamentally, however, the US constitution only protects Americans and American companies. Europeans would be foolish to trust the US with their data given this lack of basic protection and oversight.


> That won't happen.

Never say never.


If they could be stopped by legislation that bans them, they would have been stopped by the legislation that banned them prior to the legislation that authorised them, but we know this is not the case. They were being done on a wide scale long before they were legal.

Bad legislation is harder to revert than good legislation.

None of them want that. Meta actively hates you. Google doesn’t want data privacy. Neither does Apple, even if they aren’t overtly abusing it for advertising. Why would any of them push for more privacy? Their users largely don’t care (or they wouldn’t use those services in the first place).

Also, just like Galileo, this seems to be the correct path for Europe to take.

> Fighting data sovereignty is a losing battle for the US: data are too strategic to outsource, even to allies.

Essentially it comes to this. The only way to force the issue is to make confrontational demands that will just lead to a hard split.


My prediction: soon (i.e. within a few years) agents will be the ones doing the exploration and building better ways to write code, build frameworks, etc., replacing open source. That being said, software engineers will still be in the loop. But there will be far fewer of them.

Just to add: this is only the prediction of someone who has a decent amount of information, not an expert or insider


I really doubt it. So far these things are good at remixing old ideas, not coming up with new ones.

Generally, we humans come up with new things by remixing old ideas. Where else would they come from? We are synthesizing priors into something novel. If you break the problem space apart enough, I don't see why some LLM can't do the same.

LLMs cannot synthesize text, they can only concatenate or mix statistically. Synthesis requires logical reasoning. That's not how LLMs work.

Yes it is, LLMs perform logical multi step reasoning all the time, see math proofs, coding etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make it out to be.

I don't want to speak too much out of my depth here, I'm still learning how these things work on a mechanical level, but my understanding of how these things "reason" is that they're more or less having a conversation with themselves. I.e., burning a lot of tokens in the hope that the follow-up questions and answers they generate lead to a better continuation of the conversation overall. But just like talking to a human, you're likely to come up with better ideas when you're talking to someone else, not just yourself, so the human in the loop seems pretty important for getting the AI to remix things into something genuinely new and useful.

They do not. The "reasoning" is just adding more text in multiple steps, and then summarizing it. An LLM does not apply logic at any point, the "reasoning" features only use clever prompting to make these chains more likely to resemble logical reasoning.

This is still only possible if the prompts given by the user resemble what's in the corpus. And the same applies to the reasoning chain. For it to resemble actual logical reasoning, the same or extremely similar reasoning has to exist in the corpus.

This is not "just" semantics if your whole claim is that they are "synthesizing" new facts. This is your choice of misleading terminology which does not apply in the slightest.


I think that’s fair.. building a competing product would likely be relatively easy and inexpensive. But that’s true for most software now: it’s becoming easier to build, and the barriers to entry are lower.

I love Anthropic and OpenAI equally but some people have a problem with OpenAI. I think they want to reposition themselves as a company that actively supports the community, open source, and earns developers’ goodwill. I attended a meeting recently, and there was a lot of genuine excitement from developers. Haven't seen that in a long time.


> In 2026, I don't use an IDE any more.

I don't think it is the best way to look at it. I think that now every team has the power to build and maintain an internal agent (tool + UX) to manage software products. I don't necessarily think that chat-only is enough except for small projects, so teams will build agents that give them access to the level of abstraction that works best.

It's a data point but this weekend (i.e. in 2 days) I built a desktop + web agent that is able to help me reason on system design and code. Built with Codex powered by the Codex SDK. It is high quality. I've been a software engineer and director of engineering for 10 years. I'm blown away.


Curious, what kind of agent did you build? I'm building a programming agent myself; it's intentionally archaic in that you run it by constantly copy-pasting to and from fresh ChatGPT sessions (: I'm finding it challenging to have it do good context management. I'm trying to solve this by declaring parts of the code or spec as "collections" with an overview md file attached that acts like a map of why/where/what, but that can't scale indefinitely.
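A minimal sketch of the "collections" idea described above, assuming each collection of code or spec files carries an overview (like the overview md file mentioned) that acts as a map of why/where/what, and the agent's prompt context is assembled from those overviews rather than from the raw files. All names and structures here are hypothetical, not the commenter's actual implementation.

```python
import textwrap

def build_context(collections: dict[str, dict]) -> str:
    """Assemble a compact context block from collection overviews.

    `collections` maps a collection name to a dict with an "overview"
    string and a "files" list, mirroring an overview md file plus the
    files it describes. (Illustrative structure, not a real API.)
    """
    sections = []
    for name, info in collections.items():
        files = ", ".join(info["files"])
        overview = textwrap.dedent(info["overview"]).strip()
        # Each section is a short map entry the agent reads instead of
        # the full source of the collection.
        sections.append(f"## {name}\n{overview}\n(files: {files})")
    return "\n\n".join(sections)

# Example: two collections produce a short map that fits in context.
context = build_context({
    "auth": {"overview": "Login and session handling.", "files": ["auth.py"]},
    "billing": {"overview": "Invoices and payment webhooks.", "files": ["billing.py", "webhooks.py"]},
})
print(context)
```

The scaling problem mentioned remains: the map itself grows with the codebase, so at some point the overviews would need their own higher-level overview.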


Send a DM on twitter to @edfixyz (one of my accounts) and I'll reply with a link to the website tomorrow to give you a sense. Can't share a link here, it would kill my backend.


Care to send me a link by email? It's in my about. I don't use my X account and can't seem to log in; the verification SMS never arrives.


> ...and director of engineering for 10 years. I'm blown away.

It's always the CTO types who get most enthusiastic.


I’m not saying this is definitely a bot. However, this is the 7th time I’ve read a post and thought it might be an OpenAI promotion bot, clicked on the username, and noticed that the account was created in 2011.

I have yet to do this and see any other year. Was there someone who bought a ton of accounts in 2011 to farm them out? A data breach? Was 2011 just a very big year for new users? (My own account is from 2011)


I'm not a bot. You are saying that because for some reason you resent people who have a good experience with Codex / OpenAI. Curious what that is - people hate the CEO or what?

I like Claude Code too btw.

The crazy thing here is that I wrote the initial comment myself!


> It's a data point but this weekend (i.e. in 2 days) I built a desktop + web agent that is able to help me reason on system design and code. Built with Codex powered by the Codex SDK. It is high quality. I've been a software engineer and director of engineering for 10 years. I'm blown away.

Assuming you’re not a bot. It’s nothing to do with you having a good experience, it’s the way you wrote about that experience that sounds like a product placement.

I asked OpenAI's very own ChatGPT 5.2 powered by OpenAI to tell you why it sounds like a product placement:

“Because it hits a bunch of “native ad / testimonial” tells at once:

• Brand-name density in a tiny space. “Built with Codex powered by the Codex SDK” repeats the same brand in two adjacent phrases, like copy that’s trying to lodge a name in your head rather than naturally describe a build.

• Overly polished value signals. “High quality” is a generic superlative with no concrete evidence (features, metrics, constraints, tradeoffs). Ads often lean on verdict words instead of specifics.

• Credential + astonishment combo. “I’ve been a software engineer and director of engineering for 10 years” is classic authority framing, immediately followed by “I’m blown away.” That’s a common testimonial structure: I’m hard to impress → I’m impressed.

• Time-compressed “miracle build” narrative. “This weekend (in 2 days) I build a desktop + web agent…” reads like the “you can do it fast/easily now” story arc you see in promos. Not impossible—just a familiar marketing shape.

• “It’s a data point” language. That phrase feels like social-proof seeding: “don’t treat this as hype, just one datapoint,” which paradoxically makes it feel more like deliberate persuasion.

• No friction or downsides. Real engineer excitement usually includes at least one caveat (bugs, rough edges, limitations, cost, setup pain). The total absence makes it sound curated.

• Benefit phrased like positioning. “Able to help me reason on system design and code” is basically a product pitch line (target user + problem + outcome) rather than a personal anecdote (“it helped me untangle X design and refactor Y”).”


That's exactly what a bot would say


2011 just so happened to be 4 years before a very important year: 2015 — The founding of OpenAI. Unrelated note, have you tried Codex and the Codex SDK?


It's definitely a bot, just like probably around 10% of comments on HN at this point, and the majority of upvotes. And it's only increasing.

Calling it bot is a bit dismissive though. It's an agent!


Care to have a phone call tonight with the person you're calling a bot?

If so, send a DM on twitter to @edfixyz with your phone number and I will call you immediately. Or give me your twitter handle.

I'm tired of that BS - when people don't like what you write they call you a bot.


it is giving a very agentic vibe


I'm not really understanding why Thomson Reuters is at direct risk from AI. Providing good data streams will still be very valuable?


They’re one of the two big names in legal data - Thomson Reuters Westlaw and RELX LexisNexis. They’re not just search engines for law, but also hubs for information about how laws are being applied with articles from their in house lawyers (PSLs, professional support lawyers - most big law firms have them as well to perform much the same function) that summarise current case law so that lawyers don’t have to read through all the judgements themselves.

If AI tooling starts to seriously chip away at those foundations then it puts a large chunk of their business at risk.


The commodification of expertise writ large is a bit mind boggling to contemplate.


TR will not disappear. But their value to the market was "data + interface to said data" and that value prop is quickly eroding to "just the data".

You can be a huge, profitable data-only company... but it's likely going to be smaller than a data+interface company. And so, shareholder value will follow accordingly.


Seems like they should hold tight to that data (and not license it for short-term profit), so customers have to use their interface to get at it.


If customers start asking Claude first, before they ask Thomson Reuters, that's a big risk for the latter company.


Got it, thank you for the insight.

The assumption is that Claude has access to a stream of fresh, curated data. Building that would be a different focus for Anthropic. Plus, Thomson Reuters could build an integration. Not totally convinced that it is a major threat yet.


It's definitely not a major threat, but many/most finance people are clueless about what is and isn't possible with LLMs.

Again, unless Anthropic are taking on liability for their legal tools, this is not going to impact TR.

That being said, there probably is a potential company here that's gonna be built soon/is currently being built, but it definitely won't just be a wrapper around Claude as the recall will be way too low for these systems unaided.


Huge legal tech business units


I've yet to attain full-stack mastery in my job, but Musk has already attained capital stack mastery.


> "immoral technofascist life"

Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.


Is there a reason you seem to view conscience and confronting facts as opposed things? It also seems to me that morality and conscience are important to argue about, with facts just being part of that argument.


I think that someone interested in discussing facts would not write the phrase "immoral technofascist life". If I took the discussion at face value, I might respond asking for examples of how e.g. Dario Amodei is a "technofascist", but I think we can agree that would be really obtuse of me.


Haha, my experience is that people making those sorts of pronouncements will argue literally anything, so I definitely wouldn't assume they are uninterested in arguing facts. I agree, though, that arguing with some people is obtuse, and you arguing with the original post seems one of those cases.

My confusion is more with the person I was responding to complaining about people arguing morality, which seems incredibly important to discuss. Lack of facts obviously makes discussions bad, but there's definitely not some dichotomy with discussing morality (at least not with the people I know). My issue has been not so much with people arguing morality, which often makes for my more productive arguments, and more with people who hold a fundamentally incompatible view of what the facts are.


[flagged]


No see “facts” are what I use to support my worldview, and what you’ve supplied are arguments, and I can discard your arguments through debate, especially because I believe that they’re founded on your feelings (like a silly “conscience”).


/s if it wasn’t obvious.

When I see the word “facts” used like this, I feel there’s a parallel to the way the word “respect” is used abusively, as outlined in this Tumblr post that has stuck with me for years:

https://soycrates.tumblr.com/post/115633137923/stimmyabby-so...

> Sometimes people use “respect” to mean “treating someone like a person” and sometimes they use “respect” to mean “treating someone like an authority”

> and sometimes people who are used to being treated like an authority say “if you won’t respect me I won’t respect you” and they mean “if you won’t treat me like an authority I won’t treat you like a person”

> and they think they’re being fair but they aren’t, and it’s not okay.

The word “facts” can be used abusively, as in “My facts prove my worldview, your “facts” are arguments based on emotion.”


> instead of confronting facts and reality.

Okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?


It's much easier for someone who blurs the facts to keep a clear conscience because they don't have to acknowledge (to themselves) what they've done.

Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.

I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.


I played the bot (probably early 2025) and wasn't that impressed. I won 5-1 or something like that. I did win one or two local chess tournaments in the past, but I'm really not an impressive chess player.


Same. I just played it and rocked it, and I'm a 500 on chess.com. I think this is an older version.

