The Right to Lie and Google’s “Web Environment Integrity” (rants.org)
202 points by boramalper on July 30, 2023 | 202 comments


I think the fundamental disconnect here is that Google's view of a "user" is a "Chrome/Android user who shops from SERP pages" -- the user Google makes money from -- versus the more nebulous "user" of "the (open) web," which is probably only understood by the few people who were alive and online in the pre-web world (people 35 and older).

Google does not care about the latter and only wishes to make more money from the former. Google has a clear and blatant monopoly over ad-based web monetization, so most of the web will follow Google's will. We all need paychecks. The group of old farts who saw the world change are growing older and irrelevant.

I am extremely pessimistic about the future of "the (open) web" as the vehicle of our modern low-friction economy as these corporate gatekeepers (Google and Microsoft) are making such big wins recently.

Good luck out there. The World Wide Web (old school) and Old Fashioned HTTP+HTML are under grave threat from carpetbaggers.


Is there any chance of a hard fork? What about, let's say, a web 1.1 where we intentionally remove all the fancy new web APIs and mostly revert back to what we had in the late 90s? Sure, things like video support can remain but all the crazy stuff for building web apps would go away. Let the current web rot away under its corporate overlords and then, maybe, we can have the fork go back into being a fun way of publishing and sharing information.


Have you tried the dark web? For the sake of anonymity, everyone has Javascript disabled when browsing Tor hidden sites — so such sites must be designed to conform to web 1.1 principles.

It's actually a very interesting frontend platform to design for, because you don't get any Javascript support, but you get full modern CSS support.


That's the original vision of hypermedia: cross-linkable book pages that can express metadata about their content while maintaining flexibility in their visual output.

I never thought I'd say this, but HTML4, with all its billions of warts, was a pinnacle of this vision. Later developments swung too hard towards an excessively content-focused vision first (XHTML, the "semantic web"), and then swung all the way in the opposite direction, turning web protocols into dumb pipes for the general-purpose VM runtime that modern JS/HTML engines have become.

Unfortunately, the industry always, always wanted this runtime. Plugins, applets, ActiveX -- people just refused to accept basic form-based interaction. DarkWeb properties accept it only because doing otherwise would be too dangerous, in the same way people don't wear jewelry when going through a ghetto.


Rant: If people using Tor have JS disabled, then why does the Tor Project keep pushing "Tor Browser" as a one-size-fits-all program for all Tor users? The program is enormous and seems like overkill for something lightweight like text retrieval. (IMO, it should be anticipated that people might use Tor for lightweight tasks on account of the latency.)

I've been experimenting with DuckDuckGo's .onion site. Below is an example of how to search the light web over Tor without Tor Browser.

I'm curious about .onion sites because it sounds like .onion solves the reachability problem. Anyone could have a website: no requirement for a reachable IP address from your ISP, no domain-name or hosting subscriptions, no commercial middlemen. (Assuming Tor network operators are true volunteers.) Not every website has to be commercial or reach large scale.

pts/1

    x=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
    # Listen on 127.0.0.42:443; forward each connection through the local
    # Tor SOCKS proxy (127.0.0.1:9050) to the .onion service.
    socat -d -d -d tcp4-l:443,fork,reuseaddr,bind=127.0.0.42 socks4a:127.0.0.1:$x:443,socksport=9050
pts/2

     #!/bin/sh
     # Usage: echo query | $0 > 1.htm
     #        links -no-connect 1.htm
     #        firefox ./1.htm
     h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
     read x
     # -0: HTTP/1.0; -d: POST the query; --resolve pins the hostname to the
     # local socat listener set up above.
     curl -v -0 -d "q=$x" --resolve $h:443:127.0.0.42 https://$h/lite/
Using socat instead of curl

     #!/bin/sh
     h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
     read x;
     x=q=$(echo "$x"|yy046);
     export httpMethod=POST;
     export Content_Type=application/x-www-form-urlencoded;
     export Content_Length=${#x};
     export httpVersion=1.0;
     export Connection=close;
     echo https://$h/lite/|yy025|if sed w/dev/stderr;then 
      echo $x;echo $x >&2;fi \
     |socat stdio,ignoreeof ssl:127.0.0.42:443,verify=0
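For what it's worth, curl can also talk to Tor's SOCKS proxy directly, which avoids the socat listener entirely. A sketch, assuming a Tor daemon on the default port 9050 (the query string is just illustrative):

```shell
#!/bin/sh
# --socks5-hostname makes curl resolve the .onion name inside the proxy,
# so the hostname never touches local DNS.
h=duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion
curl --socks5-hostname 127.0.0.1:9050 -d "q=hacker news" "https://$h/lite/" -o 1.htm \
  || echo "request failed -- is Tor listening on 127.0.0.1:9050?" >&2
```
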


There is a spectrum of privacy-related use cases that Tor Browser serves; not all of them require that you have untraceable anonymity from the perspective of the server you're talking to.

For example, when using Tor Browser to connect to regular public websites, your goal is usually just to obscure your connecting location, hide your traffic from your ISP, and punch through any firewalls between you and the destination, without first being required to establish a relationship with some specific proprietary VPN company (which might not even be possible in some countries). So people doing this tend to be willing to enable JS, at least on a per-site basis.

It's only really when you're in the kind of trouble where state actors are trying to correlate your IP address through inadvertent connections you might make to state-owned-honeypot Tor hidden sites, that the full no-JS paranoia is warranted.

But, to safeguard the anonymity of those people, it's better for anti-fingerprinting purposes if the people using Tor Browser for more mundane things also have JS disabled. So Tor Browser disables JS by default.

Which means that it's pretty much a given that if you're doing web design specifically for a Tor hidden site, then you have to assume that people accessing your site will have JS disabled. (And that you can't just ask them to enable it — they'll say "nice try FBI" and close the tab.)


Tor Browser defaults to JS enabled.


> Is there any chance of a hard fork?

We'll build our own internet! With blackjack and hookers!

More seriously, I see echoes of the gentrification cycle. At the end of the cycle nobody wants to live in the soulless corporate hellscape they've helped create, so they follow the cool kids to the next up-and-coming neighbourhood. It works for social media sites, so why not for an entire protocol?

If you can figure out a protocol where ads don't work, I'm in.


> If you can figure out a protocol where ads don't work, I'm in.

Is it even theoretically possible to create such a protocol? Preventing tracking is feasible, but telling apart ads from real content does not sound solvable on a protocol level.


I don't believe so. But I also believe that's the root of the problems with the current web.

I believe that wherever we go, marketers will follow us. But wouldn't it be great if I was wrong...


> Is there any chance of a hard fork?

The only hope is anti-trust breakup of Google. Chrome has to be pried forcefully from their hands.

We should launch massive campaigns not just in the US, but also Europe and other critical markets.

We shouldn't back down even if they abandon WEI. They'll just keep trying as they have with AMP, Manifest v2, WHATWG [1], etc.

Google can never be allowed to build browser tech so long as they control search.

The web must remain open.

[1] WHATWG took unilateral control over the HTML spec. They abandoned the Semantic Web, which had led to RSS, Atom, etc., and would allow documents to expose information you could scrape and index without Google Search. Google wanted documents to remain forgiving and easy to author (but messy, without standard semantics, and hard to scrape info from)


The thing that I always wonder about when we talk about anti-trust action to break up big companies like this... how will that actually fix things? If Chrome, Inc. was its own independent company, it would still be incentivized to do things that Google Ads, Inc. wants.

Mozilla is barely able to fund itself, and a big chunk of that funding comes from Google. Surely Google is careful to avoid any overt impropriety with that relationship, since they don't want to come under more regulator scrutiny. But why would a Chrome, Inc. care about that sort of thing? They'd take the same money from Google Search, Inc. that Mozilla does to keep Google as its default search engine. They'd still happily implement WEI and other garbage that Google Ads, Inc. wants, and Google Productivity, Inc. would still ensure that Docs, Sheets, Drive, etc. are all super-compatible with Chrome (and work with the Chrome team to ensure that's the case), and not care so much about other browsers.

I have no doubt that things would be better if we were to break up big conglomerated companies like this, but I'm not entirely sure that breaking up Google would achieve the goal of helping the web remain open.


There are multiple platforms trying to provide this (neocities most prominently, mmm.page most recently, various others that occasionally get posted to HN). Of course, we don't need a platform; we need a culture, and infrastructure, and protocols, and some balance of organization and search. And have it all not sitting on Amazon's servers. And a way to pay for the parts people can't or won't provide for free.

I want to see it; I don't know the path there.


> Is there any chance of a hard fork? What about, let's say, a web 1.1 where we intentionally remove all the fancy new web APIs and mostly revert back to what we had in the late 90s?

Sure. It's really just a matter of mass appeal. We could fork the existing browser base and eliminate the new attestation API. Some projects are already doing this from what I understand.

What will keep attestation from being used is that websites will lose business if their customers can't access the site. We went through this with user-agent string checking in the '90s and '00s, when IE and Netscape/Mozilla were at war and every site had a strong opinion about which browser it would support. Even today you occasionally see sites that hit you with "unsupported browser" errors if you aren't running a specific version of something.

The solution to this was everyone realized they were throwing money away by excluding a large portion of their customer base. At the time no single browser really dominated the market share so it was easy to see that an IE-only site was losing 33% of internet traffic. These days everything is basically chrome-based so this hasn't been as much of an issue.

So in the future we'll see this same thing. Non-attestable browsers will be locked out of attested sites and it will be a numbers game to see if sites want to risk losing these customers/viewers.

At the end of the day, you have to remember that everything on the web is just a TCP socket and some HTTP which is a flexible text protocol. We can build pretty much anything we want but it takes inertia to keep it going.


> non-attestable browsers will be locked out of attested sites and it will be a numbers game to see if sites want to risk losing these customers/viewers.

Rather than being completely blocked, I think non-attestable browsers will be subject to more CAPTCHAs and other annoyances, similar to what Tor users see today from anti-DDoS services. Perhaps ad-supported services using large amounts of bandwidth will decide not to support non-attested users, since the ad revenue from them isn't enough to pay for the bandwidth.

What I would like to see is a solution for anonymous microtransactions, so web sites have a monetary incentive to serve users who want to use a non-attestable browser (and don't want to see ads).


>Is there any chance of a hard fork?

I would like to think so but as someone who's tried to hack on the chromium codebase I'd say it's easier to make a new browser from scratch than to figure out how to make meaningful changes to chromium.


Nothing is stopping you from just not using new features in your website.


The problem is convincing everyone else to do the same, especially against Google's propaganda and the accompanying mob of rabid trendchasing web developers.


This is an even lamer version of the "If you don't like it, just don't buy products from <brand engaging in antisocial behavior>" rhetoric.

Anything that allows companies to exert more control, will be used to exert more control.


I'm down. Sign me up.



Gemini is a joke. The main proponents like Drew DeVault chuck a tantrum when browsers allow users to optionally show favicons https://github.com/makew0rld/amfora/issues/199


Gemini is what happens when people are so traumatized by years of the web platform being horrifying, they react to the other extreme and build something so bare-bones (with a policy of not allowing any kind of extension) that it will never become a general-purpose solution that most people will adopt.

And that's fine; if some people want their own sandbox so they can avoid the horrors of the WWW as much as possible, more power to them.

But let's not pretend this is going to become a widely-used platform when people can do only a small fraction of the things they can do on the web. Again: that's fine! But it can't and won't replace the web.


This is fantastic! Just downloaded an iOS client and really having fun going down the rabbit hole.


"HTTP GET, but we chopped off the low-order byte of the return code" is neither sufficient nor necessary for implementing a non-Googlized web.


A wasm-only web perhaps?


I think you're going in the opposite direction, my friend.


His comment system is currently broken and will just 404 and return you to a URL at https://rants.org/%5Ehttp:/your.ip.addy.here/. So I guess I might as well post here instead.

>My web browser (currently Mozilla Firefox running on Debian GNU/Linux, thank you very much) will never cooperate with this bizarre and misguided proposal.

Mozilla used to be about user freedoms. Lately Mozilla has been a front-runner in turning off plain-HTTP (non-TLS) support. They will likely be one of the first browsers to remove support for it, and eventually for HTTP/1.1 as a whole. ref: https://blog.mozilla.org/security/2015/04/30/deprecating-non...

Given that HTTP/3 as implemented by Mozilla cannot connect to websites with self-signed TLS certs, the future of Firefox is as a browser that can only visit websites that third-party TLS CA corporations periodically approve (even if those corporations are currently benign, like Let's Encrypt). Does this remind you of anything? That's not to say other browsers are better in this respect. Mozilla's Firefox and its forks are the least worst... it's just that everything is getting worse all together.


Having personally experienced what happens to my webpages when Comcast realizes it can do whatever it wants to bare HTTP requests, up to and including inserting invasive advertisements loaded with arbitrary javascript, I think "least worst" is exactly the right phrase for requiring HTTPS everywhere. I do agree that it would have been nice if there was a standard that required encryption without also requiring authentication, but this is the world we live in now.


If it didn't require authentication, Comcast could just MITM you.


Right, I knew I was forgetting something about that. Yeaaaaaah.

And speaking of that, let's not forget about downgrade attacks! Removing HTTP is the least worst option in much the same way that you disallow RC4 when negotiating encryption for an SSH connection.


I also had Comcast do that. It made my Steam (games service) browser un-usable one time. So in commercial contexts HTTPS makes sense.

But there is more to the web than just the commercial, the institutional, and the like. Websites run by actual human beings, without profit motive and without any need to be constrained by the realities of CA TLS, exist. The major browsers are all about money these days, so they'll prioritize the safety of monetary transactions above user freedoms. But just because this is the right decision for a profit-driven company or institution doesn't mean it's the right thing for everyone and should be applied to everyone. In fact doing so will ruin the web.


With advertising eyeballs and third parties in the loop there is nothing that is not "commercial". Do you think that Comcast would have refrained from injecting ads and attack scripts if you'd been browsing Wikipedia or the Internet Archive or your own personal blog instead of Steam? Do you think that they'd have gotten less money for those impressions or extracted less of your personal data? The use case doesn't matter because the attackers in the HTTP vs HTTPS case have their own monetization scheme and their own attack chain and they _do not care_ whether something is commercial or private.


Nearly all commercial services require the execution of javascript. For example, I couldn't disable javascript in my Steam browser. But in my personal browsers I have javascript execution set to temporary-whitelist-only. Much like how I don't open and execute every email attachment sent to me. Comcast's injection attacks actually don't do anything without JS being run. I am not being hyperbolic in saying that 100% of malicious attacks on the web of the type you're floating use javascript and fail without it.

Personal and non-profit (non-incorporated) websites are mostly static HTML and files in directories, and do not require that everything everywhere be encrypted and verified. This "execute everything blindly" practice is only accepted because it's required for businesses, which require JS execution. I reject this premise because no, it doesn't apply in all contexts, and the risk of MITM attacks (on personal websites) is very low once you question and mitigate it. The document web is much safer and requires fewer sacrifices than the application web.


> a browser that can only visit websites that third party TLS CA corporations periodically approve

Er... no. It means that Firefox will only connect to websites that the domain administrator of the system approves of. You, as the administrator of a computer, can install whatever X.509 roots of trust you want. Including a root of trust you own, which can issue certificates for whatever websites you approve of.

Today, where there are residential users who can't get the attention of big companies, you'd probably then run a local forward-proxy that re-wraps connections to sites you trust, with certificates rooted in your root-of-trust.

But this is just a sociological evolution of the original design intent of X.509: where each corporate/institutional/etc domain would directly manage its own trust, acting as its own CA and making its own trust declarations about each site on the internet, granting each site it trusts a cert for that site to use when computers from that domain connect to it. Just like how client certs work — in reverse.

(How would that work? You'd configure your web server with a mapping from IP range to cert+privkey files. Made sense back when there was a 1:1 relationship between one class-A or class-B IP range, one Autonomous System, and one company/institution large enough to think of itself as its own ISP with its own "Internet safety" department.)


> You, as the administrator of a computer, can install whatever X.509 roots of trust you want. Including a root of trust you own, which can issue certificates for whatever websites you approve of.

That is a completely unreasonable assumption. The barriers to entry have been greatly increased.

How many users have devices that they are really administrators of? Fewer and fewer.

What is the technical challenge of setting up your own HTTP server that can be browsed with an off the shelf browser on your local computer?


> How many users have devices that they are really administrators of? Fewer and fewer.

As long as nobody has forced you to join your computer to a domain and accept the installation of group-policy overrides, you're still fundamentally an administrator of that machine.

You might not ever feel the need to administrate it, because the OS vendor is often co-administering the machine (see: Windows or macOS when you use a local account rooted in their cloud SSO) but the OS vendor hasn't restricted you from doing your own administration in the way that a corporation or institution administering the domain your device belongs to would restrict you. You still have the ambient authority to administer your machine, whether you ever bother to elevate yourself or not.

You can still install your own X.509 roots of trust. Even on, say, iOS! (You must administer the iOS device using tools — e.g. https://github.com/ProfileCreator/ProfileCreator — that run outside of the device on a "real computer"; but that's just a fact of history, to do with how system administrators generally prefer to interact with computers, not a property of the target device's security. A config profile is just a file format; if someone ever wanted to make a profile editor that ran on iOS itself, they could.)

(And if we're talking about a machine that is corporate or institutionally controlled? Well, then it's the responsibility of the people who manage your device — your IT department — to decide whether a given cert should be given trust. Like it always was under X.509.)

> What is the technical challenge of setting up your own HTTP server that can be browsed with an off the shelf browser on your local computer?

The approach where you run a proxy that wraps untrusted connections into trusted ones is fully general, but yes, only really applicable to the most advanced users. But then, only the most advanced users really need and/or should want the full power of this approach. Only someone with a lot of experience in network security should consider themselves capable of vouching for a non-TLS HTTP connection as worth being trusted. You have to basically come up with a [continuously falsifiable!] "attestation heuristic" for the remote yourself — that it stays on the same IP, that its DNS records haven't changed owner, that the server is still sending the same Server response header, etc.

(In fact, if the point is just to look at old websites that were never updated to use TLS, it's probably better to let someone else solve this specific problem for you, through a full application-layer compatibility forward-proxy service like https://theoldnet.com/ .)

If your needs are slightly weaker — if you can assume that every remote is at least using self-signed TLS certs rather than not using TLS at all — then the problem is vastly simplified: you can directly trust any cert by putting that cert directly into your X.509 trust store (in effect making it a root-of-trust — though it doesn't have the X.509 property that enables other certs signed by the cert to be trusted transitively, so it's a leaf-node root-of-trust. A "stump of trust", if you will.) You don't need to run any local servers to do this. And, at least in some cases (e.g. macOS Safari) it's just a few clicks to get from "this cert is invalid" over to "add this cert as a root-of-trust" (i.e. https://i.imgur.com/IXpF4ld.png).

The only problem with this approach, is that there will be no continuity of identity if the X.509 cert of the remote gets to the end of its lifetime and must be renewed. You must act in the capacity of the CA, figuring out again, from scratch, whether the new remote cert should be trusted.
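As a concrete illustration of the "stump of trust" idea (my sketch, with placeholder names), a self-signed cert can act as its own trust anchor:

```shell
# Generate a throwaway self-signed cert (placeholder CN "leaf.test").
openssl req -x509 -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.crt \
  -days 90 -subj "/CN=leaf.test" 2>/dev/null

# Supplying the cert itself as the trust anchor makes verification succeed,
# even though no CA ever signed it:
openssl verify -CAfile leaf.crt leaf.crt
# -> leaf.crt: OK

# A client would then pin it explicitly, e.g.:
#   curl --cacert leaf.crt https://leaf.test/
```
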

If your goal is to get together with a bunch of your buddies to escape the "X.509 CA cabal" by doing your own cert signing, you'd therefore be better off not using self-signed certs, but rather creating your own CA (probably an automated one using ACME!); importing that CA cert onto all your devices as a root-of-trust; and then having that CA sign certs for all your group members' sites. Then you'll get all the advantages of regular X.509, just in a sort of "overlay world" where your group's browsers can trust both regular sites and your private-world sites, while regular people who aren't part of your group will see a certificate error when visiting your group's sites (unless they also decide to import your CA as a root-of-trust.)

(TBH, this would be kind of a cool "member's only club" to join. In theory, with sufficiently-advanced ACME probes, you could also enforce whatever properties of each site you liked, at least at time of issuance. You could create an "overlay web" that acts like Gopher/Gemini or whatever else you like, just by doing this.)
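A minimal sketch of such a private CA with plain openssl (all names here are placeholders; a real setup would automate issuance with an ACME-capable CA):

```shell
# 1. The club's root of trust: a self-signed CA cert, imported on members' devices.
openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=Members-Only Club CA" 2>/dev/null

# 2. A member generates a key and a certificate signing request for their site.
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
  -subj "/CN=member.example" 2>/dev/null

# 3. The CA signs it, adding the subjectAltName that browsers actually check.
printf 'subjectAltName=DNS:member.example\n' > san.ext
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out site.crt -days 365 -extfile san.ext 2>/dev/null

# 4. Any device that trusts ca.crt now trusts the member site's cert.
openssl verify -CAfile ca.crt site.crt
# -> site.crt: OK
```
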


You commented on another thread a few days ago (which I also replied to).

I still don't understand your disdain for the idea of a 100% encrypted web.

Rather than saying “does this remind you of anything”, can you tell us what it reminds you of?

I guess the issue is, eventually, CAs can decide not to issue certificates to certain people classified as malicious/nefarious/etc?

Can you clearly articulate your position on this point?


My guess is that if only encryption were the goal, then browsers should trust self-signed certs or at least upon first visit, present the cert and ask whether to trust it in the future. *

Instead, the current system depends on a set of built-in trusted root certificates run by opaque monopolies (at least pre-Let's Encrypt), plus a lot of hassle to add self-signed certs, if that's even supported at all. (IIRC some browsers like Chrome will ignore system-trusted CAs in an attempt to "help the user be more secure"; ref: https://serverfault.com/questions/946756/ssl-certificate-in-...)

* There is precedent for this, for things like Remote Desktop or SSH where only encryption is the goal, their default behavior is exactly this: confirm upon first access, and remember the approved cert for the future. You do not need to get your server blessed by a CA to connect over ssh :)
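A trust-on-first-use flow for HTTPS could look roughly like this (my sketch; the file and host names are made up, and the "remote" cert is generated locally so the example is self-contained):

```shell
#!/bin/sh
# Stand-in for the remote server's cert (in reality you'd fetch it with
# something like `openssl s_client -connect host:443 | openssl x509`).
openssl req -x509 -newkey rsa:2048 -nodes -keyout remote.key -out remote.crt \
  -days 365 -subj "/CN=remote.test" 2>/dev/null

host=remote.test
known=known_certs.txt            # analogous to ssh's known_hosts
fp=$(openssl x509 -in remote.crt -noout -fingerprint -sha256)

if grep -q "^$host " "$known" 2>/dev/null; then
  # Later visit: compare against the pinned fingerprint.
  if grep -qF "$host $fp" "$known"; then
    echo "fingerprint matches pinned value"
  else
    echo "WARNING: certificate for $host has changed" >&2
  fi
else
  # First visit: trust and remember, as ssh does.
  echo "$host $fp" >> "$known"
  echo "pinned $host"
fi
```
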


Yeah but SSH and RDP aren't used by grandmas that get their wallets emptied by scammers. Forced SSL everywhere is a good thing.

It's bad that it's run by corporations, but it's still a good thing overall. Maybe it should be run by different people(like IDK ICANN over something like the UN)


Again, what's the risk that a first time visit to a site is going to give you a fake certificate?

OTOH SSL has done nothing for preventing phishing, since no CAs actually verify anything beyond you owning the domain.


Well, any time anyone might be loading up a website for the first time in a coffee shop.

Also, "remember this cert forever" (cert pinning) has been an ops disaster for a lot of sites that have tried it. So in practice "the first time" might be more like every week or every month. What's the risk that a coffee shop will serve you a malicious cert once a week?

Also if they do it and you move back to your home connection… the site is broken there because now it’s returning a different one than was pinned (by the attacker!).


There are plenty of ways to improve security but maintain openness.

I think a good idea might be to allow TOFU and self-signed certs only as a fallback, assuming there was no initial mismatch, and then update the cert periodically.


This amounts to a new "won't someone think of the children?"

All the little green lock icons in the world haven't put a dent in phishing or spoofing.


Mandating encryption scares me because it limits being able to be seen online to only those with control over DNS.

And there are too many countries with the power to mess with their citizens' DNS resolution, and too many ways for domain names to be taken down.

This is creating a system filled with more absolutes than there should be. And the people doing the encrypting aren't willing to put in any time or effort on other basic affordances. Opportunistic encryption isn't really trustworthy (you can't be sure you're not being MITMed) but it has real upsides, like letting you know you're still talking to the same people. If we took an ever-broadening rather than ever-narrowing stance on what we could do to encrypt, the security picture would be much less scary. Instead we are requiring more and more layers of systems to be involved, with more chokepoints for governments, in a way that seems ossifying and fragile.


There are ways to do an encrypted web that don't require a corporation to approve your website every 90 days. If Firefox changed its release build so that the linked HTTP/3 library accepted self-signed certs, the problem would be almost entirely mitigated while still retaining a 100% encrypted web (assuming trust-on-first-use is acceptable to you). But these things aren't going to happen while everyone keeps ignoring and downplaying the implications of HTTP/3 support as shipped now, and how it creates a handful of content-approval gateways like WEI.

When HTTP/1.1 is a thing of the past and Firefox won't load any endpoint without CA TLS on HTTP/3 then the fact that there are only a handful of corporate entities you can get a TLS cert from means they'll be an even more tempting target for those that wish to apply pressure and restrict access to whatever topics they don't like. It wouldn't be the first time a CA has been pressured to drop a site and it certainly won't be the last if things go this way.

Additionally, it significantly increases the complexity of setting up a visitable personal website. There are ACME packages, and some CAs that hide this complexity, but the complexity is there and does break. It acts as a roadblock to what I see as one of the viable contributors to keeping the web open: self-hosting.

But again, I brought it up because the original linked article suggests Mozilla would never accept something as bad as WEI. With the way FF HTTP/3 is implemented they already have done something similar in outcome. So I do think we need to make noise about WEI (and HTTP/3).


What happens when CAs say "you can't get a certificate if you supported <insert ideology here>"? Or "you can't get a certificate if you are racist"? Or "you can't get a certificate if your credit score is too low"? Or "you can't get a certificate if your website contains <pornography/warez/p2p encryption/firearms/anything we don't like>"?


Not to mention the CA could potentially, intentionally or not, leak keys and allow governments, hackers or other interested entities to decrypt traffic.

Centralising trust will always be a bad idea, regardless of context.


Most TLS connections these days use cipher suites with perfect forward secrecy so governments won't be able to decrypt the connections without an active MITM attack. Since Certificate Transparency is effectively required for all CAs now, that will leave a paper trail.


This sounds like a great way to get lots of people to run old software. I'm sure most people wouldn't even bat an eyelid when they go on to install an out of date browser to make sure a website they want to visit works.

Security people can complain as much as they want, but it's these kinds of anti-user practices that makes users hate updating.


> Security people can complain as much as they want, but it's these kinds of anti-user practices that makes users hate updating.

Indeed, I've always thought the classic saying about those who give up freedom for security is very relevant in the current times. I'm quite certain that it's possible to respect the user and improve security (for the user), but instead they've been using security as an excuse to do worse to the users.


I'd guess that ship has sailed for many people. I never update my "consumer" software because every update makes it worse. I can't be the only one. Nobody is getting any kind of positive reinforcement on updating, best case scenario it does nothing, mostly it makes stuff worse or takes away freedoms.


I'm currently at 399 apps that "neeeeeeed" to be updated.

I manually only update banking apps and the likes.

And if an app forces me to update (lazy API devs!) I usually delete it and find a new one.


That would do the opposite. The main idea of this feature is that DRM and banking sites could block access when they can't verify the browser is untampered with. So your old browser would just be shown an error page telling you to install the latest chrome to continue.


This would likely be a great way to get lots of people to run old software for a while, until criminals take advantage of all those juicy unpatched vulnerabilities and all their devices start showing them ads for penis pills on every webpage and their credit card number gets stolen every other week.


That's my point: these kinds of practices in software updates are why people don't want to update. Look at the people who kept running Windows XP or 7 after EOL.

When companies create a culture that updating is harmful to the user then users will learn to not update.


“Least worst” is right.

A quick nod to Tor Browser, the Firefox fork which will always support HTTP in order to support the vast majority of Tor hidden services.


Onion addresses aren't CA-based TLS, but they aren't unencrypted HTTP either.


That would be pretty dumb then because there is plenty of older IoT stuff that you won't be able to access anymore with FF. Sick and tired of all these companies, foundations and other silos telling people what they can and can not do with their own hardware.

If I want to visit scary non encrypted websites I should be able to do so.


I agree you personally should be able to, as a Hacker News user with 200,000+ karma.

But I would prefer my grandma be blocked from all non-encrypted sites, sorry!


That can be handled by a setting that defaults to requiring HTTPS. Maybe even make it remember the setting per site.


IMO HTTPS should go in the direction of 3rd party cookies. Disabled by default, but easily available if you know how to change your browser settings to allow it.


What does your grandma think? Did anyone bother to ask her?


My own grandma doesn't know what encryption/HTTP/TLS etc are. I think for those who are uninformed it's a sane default to not trust self-signed certs.

That said, there should be a toggle in preferences or about:config to allow it for those who know what they are doing.


What question would I ask her?

(Seriously tho, how would you frame the question in a way my grandma, whose only access to the internet is via an iPad, could understand)


Couldn't I just stand up a quick CA with easyrsa scripts?


Yes, absolutely. Nobody else will trust it, but you can always set up your own CA for use by computers you use.

Which is fundamentally still better than insecure HTTP, because it's at least possible to take steps to trust it and make sure it's the same server you expect to talk to.


My typical website's visitor is someone on the other side of the earth I don't know and will never know. Getting my root cert in their trust store just isn't a feasible option in this extremely common use case for a public website.


On the topic of user freedom, Firefox also doesn't allow installing extensions not signed by Mozilla unless you use a fork, Nightly, or Developer Edition (which is just a badly named beta)[0]. The hilarious thing is that Safari, the web browser from the company infamous for walled gardens and not letting you control your device, does let you install unsigned extensions on desktop[1].

[0]: https://wiki.mozilla.org/Add-ons/Extension_Signing

[1]: https://developer.apple.com/documentation/safariservices/saf...


This is actually no different from Firefox: "The Allow Unsigned Extensions setting resets when you quit Safari; set it again the next time you launch Safari."

You can run an unsigned add-on in regular Firefox by opening about:debugging#/runtime/this-firefox and clicking Load Temporary Add-on...

In both cases, it only lasts until you quit the app.


And that's what I deserve for not reading the entire page. Thanks for the correction.


"If your computer can’t lie to other computers, then it’s not yours."

This fundamentally comes down to "do you really control your computer, or does someone else?":

https://youtu.be/Ag1AKIl_2GM?t=57


And also (this one was written in 2002!)

- "Who should your computer take its orders from? Most people think their computers should obey them, not obey someone else. With a plan they call “trusted computing,” large media corporations (including the movie companies and record companies), together with computer companies such as Microsoft and Intel, are planning to make your computer obey them instead of you. (Microsoft's version of this scheme is called Palladium.) Proprietary programs have included malicious features before, but this plan would make it universal."

https://www.gnu.org/philosophy/can-you-trust.en.html


Is there a link to an article that actually goes into WEI on a technical level that isn't the proposal itself?

So many things posted to HN about it have been the grand overview, which is a perspective worth diving into but also has drowned out every other perspective to the point where it's very difficult to figure out what's really happening with the proposal here.


Not really. Every explainer assumes the proposal is lying, and explains how half of it means the opposite of what it says.


Because any other option is not sensible:

1. The authors don't understand the tech they use. (Believe that if you like - but it would be even worse for the internet.)

2. The authors don't understand that something unrealistic is unrealistic.

The idea that you can give companies or corporations tools to check "did the user modify their environment" and expect them not to use those tools to exclude users is stupid or disingenuous, because advocates for this have said exactly that: we want this proposal to do precisely that.

Again, Google tries to defend it by saying "we will return an invalid 'false' for some Chrome users/requests" [to make sure websites can't rely on it]. That's bad for me not only because it means that when Google revokes this policy we're in an even worse situation, but it also leaves open how Google decides who gets this 'false':

I'll reject the per-request option immediately, not only because a website can easily circumvent it [by checking n times] to the user's detriment, but because it would also contradict WEI's official documentation (same token for the same input from the user).

And this leads to another point: if Google wants to return false negatives, it would need to either keep records of who is supposed to get 'false' - the EU will not be happy with that (and it contradicts the "Chrome users" framing) - or, more likely, implement it in Chrome itself.

Now that we've established that implementation in Chrome is most probable, we can also establish the options:

A) Implement it per profile - companies will ask you to reset your profile if you're a false negative.

B) Implement it per connection - companies will just ask you to refresh.

C) Implement it by device age / OS version / type - Google could even make the manufacturers happy with this one.

… as you can see, at most this will be a nuisance, and even if, in some weird way:

Z) it's implemented on some super-complicated basis - circumvention will still be possible, because…

3. Google is playing a disingenuous word game with us by saying "we won't destroy the open web".

Other Chromium browsers may ignore Google's X% false negatives (Google may lose a few % of users before it scraps this policy). And Google has zero need to actually do anything when other companies misuse this API.

In simple terms, the part that should worry you is not "Google will destroy the web by using this API in Chrome and its services"; what should worry you is that other companies will do it for Google, and Google will wash its hands of it by saying "we meant well but it didn't work out". You can already see that tone from Google: we don't want that, so we created these "holdouts".

Don't let Google move the Overton window. They are proposing something any sensible person sees as a clear-cut attack (or a stupid idea that can only work this way) on privacy and your right to use your device (and, for some people, your OS and/or browser) as you want. They are at fault here.
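
To put a number on the "check n-times" circumvention mentioned above: if the holdout 'false' were decided independently per request, a hostile site could simply retry the check. A toy Python model (the 5% holdout rate and the per-request model are my assumptions for illustration; the proposal specifies neither):

```python
import random

random.seed(0)

HOLDOUT_RATE = 0.05  # hypothetical: 5% of requests get a false 'not attested'

def attest_once(is_real_chrome: bool) -> bool:
    """Toy model: real clients pass except for a random per-request holdout."""
    return is_real_chrome and random.random() >= HOLDOUT_RATE

def site_check(is_real_chrome: bool, n: int = 3) -> bool:
    """A hostile site defeats per-request holdouts by just retrying n times."""
    return any(attest_once(is_real_chrome) for _ in range(n))

print(site_check(True))            # a real client passes anyway
print(site_check(False))           # a modified client still always fails
print(f"{HOLDOUT_RATE ** 3:.6f}")  # odds that all 3 checks hit the holdout
```

Which is exactly the contradiction noted above: a deterministic "same token for same input" verdict can't also be a random per-request holdout, so it would have to stick to particular profiles, connections, or devices instead.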


As far as I can tell, there are two reasons for this feature, the legitimate one is that users largely have browser extensions which are malware. They may have even been legitimate when they installed them, but then auto updated to be malware later. This poses a problem for banking sites because desktop browsers can no longer be trusted to be secure so they push you to use the mobile app or at least confirm transactions with the app which is trusted.

The illegitimate reason is they can stop ad blockers and content downloaders / DRM bypassers.


What you've written is a great illustration of what tedunangst said people are doing.

The proposal literally is not about extensions. The only context in which extensions are mentioned is to explicitly say that the mechanism is not for policing the installed extensions or other browser features.

So how exactly are you determining that this is what the proposal is really about? It seems that it can only be by ignoring the proposal text and making up your own ideas of what it is about.


I'd avoid taking that route because that would move the Overton window [0] on the issue to Google's side.

The premise is unacceptable and discussion on the technical merits will only give it the fuel to make it more material.

[0] - https://en.wikipedia.org/wiki/Overton_window


If the window even applies here, it expanded the moment Google initiated all of this publicly.

We're in it now, and fully understanding the issue and the problems it purports to solve is incredibly important.

Dialogue is all we have, and to even build a solid argument against WEI, understanding the details matters.


It's worth understanding the problem it purports to solve in order to properly dismiss it. WEI positions itself in a way that sounds ambiguously like it might ever serve the user's purposes, and a clear framing of the problem statement would make it more obvious that it does not serve the user at all.

For instance, one of the framings of WEI is that it gives advertisers a way to verify the client so they don't "have" to do fingerprinting. Except WEI does nothing to take away fingerprinting, so advertisers will then have fingerprinting and WEI. (Even if it did simultaneously take away fingerprinting it would still not be OK, but the current framing is not even offering the user benefit it claims to offer.)


Before taking away fingerprinting there would need to be a sunset period to have everyone migrate over to the new API. Ripping it out before new APIs are available or doing it at the same time is irresponsible.


You can't "take away" fingerprinting, because it isn't a single API or even a single set of APIs. Fingerprinting is a set of techniques; you can nullify some of them, but there is nothing stopping companies from inventing new ones.

Despite what Google say, fingerprinting will never go away - any new feature in that space will just be in addition to existing and future fingerprinting techniques.
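
To illustrate why it can't be "taken away": a fingerprint is just a digest over whatever signals happen to be observable, so removing one API only shortens the input. A toy Python sketch (the signal names and values are made up):

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Combine whatever observable attributes exist into one stable ID."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(signals.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

signals = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "2560x1440",
    "timezone": "UTC+2",
    "fonts": "Arial,Helvetica,Noto",
}

full = fingerprint(signals)

# Nullifying one technique (say, font enumeration) just yields a
# different -- but still perfectly stable -- identifier:
reduced = fingerprint({k: v for k, v in signals.items() if k != "fonts"})

print(full != reduced)  # True: both IDs still track the same user
```

Any newly observable surface (a new API, a timing quirk) simply becomes another key in that dict, which is the point: there is no single thing to remove.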


I'm not sure how that addresses the point you're responding to: regardless of the excuse, WEI with fingerprinting is bad, and WEI without fingerprinting is also bad, and fingerprinting without WEI is also bad

doing bad things (for example, any of the 3 above options) is more irresponsible than implementing bad things poorly


So how do you intend to prevent fraud and abuse? You need signals of some kind to know whether a visitor is likely to be malicious or not.


Let me ask you instead: how do you intend to prevent matrimonial fraud?

You cannot, because that is not how the world works - crime happened, happens, and will happen. You can deter attempts or fight it through education or by arresting criminals, but never eliminate or prevent it.

"You need signals of some kind to know whether a visitor is likely to be malicious or not."

An innkeeper in the 13th century: "I need signals of some kind to know whether a visitor is likely to be malicious or not before he enters." - See how ridiculous it is?

You're framing this wrong - you don't have a 'visitor' but a 'client', and as far as I'm concerned you do not spy on every client to "know" whether they are "likely to be malicious or not". You may be suspicious of a client's activities, but that is all you can realistically do.

Also, define "fraud and abuse", because it can mean many things with many different solutions.


>An innkeeper in the 13th century: "I need signals of some kind to know whether a visitor is likely to be malicious or not before he enters." - See how ridiculous it is?

That innkeeper would have used the signal of how the visitor looked. If a visitor was being malicious and got kicked out, that person could not just come right back in, as the innkeeper would be able to tell that he was the same person as before.

Also, your metaphor is not great because WEI can only happen after the first page load, since it requires the site to use JavaScript to use the API.


>That innkeeper would have used the signal of how the visitor looked

Not like that ever backfired.

>innkeeper would be able to tell

Assuming he has a photographic memory and the visitor didn't change his looks in any way (shaving, getting a scar, etc.).

If you want the equivalent of 'looks', you have the IP address - it has exactly the same power as a signal.

>Also, your metaphor is not great because WEI can only happen after the first page load, since it requires the site to use JavaScript to use the API.

For the user that's an irrelevant point, as people do not care how you run your server - only whether they can use the page.

To put it simply: I run my browser of choice on my OS of choice with my stack as I wish (otherwise I wouldn't run Linux). The idea that a server owner should get any say over this is simply intrusive.


You don't have to worry about ad fraud if advertising goes away.


an interesting question

maybe your wish is possible, maybe not at all!

in any case, WEI and fingerprinting are both still bad regardless of the answer anyone might provide


Agreed. If WEI means browsers can more aggressively restrict fingerprinting, I'm all for it.


There is zero connection between WEI existing and browsers being able to restrict fingerprinting.


There are various stakeholder interests which need to be balanced. As much as people like to invent nefarious motives, in reality there are likely important players on the internet who have valid use cases for fingerprinting which cannot be broken carelessly. Efforts like WEI try to address their needs via alternative means. I agree that it's not a guarantee, but it certainly seems like a possibility.


"There are various stakeholder interests which need to be balanced,"

said the giving-alcohol-and-cigarettes-to-children lobby,

"As much as people like to invent nefarious motives, in reality there are important players manufacturing alcohol and cigarettes who have valid use cases for children consuming them which cannot be broken carelessly."


Is there really much browsers can do to actually effectively restrict fingerprinting without going all out like Tor Browser? WEI may disincentivize websites to not use fingerprinting, but if they really wanted to, they could use it for de-anonymization purposes.


I don’t think there is any indication that this would be the automatic outcome.


But the details aren't the issue. It's the entire idea of remote attestation that is repulsive and user-hostile.

Otherwise I agree that examining the details doesn't move the Overton window - the broad idea already did that.


Details aren't the issue and that's exactly my point.

There is no reason to discuss on average how many lightning rods we should place on each street and the necessary budget allocations and compromises that would need to be made and how this whole undertaking would be facilitated.

We have been doing just fine without this lightning-rods-on-every-street proposal Google made. We knew we could have it; we don't have it because we don't want it.


The problems it aims to solve seem like an issue for Google, or for those who care whether their ad is viewed by a human or not. Can someone give a counterargument for how this might benefit the users themselves?

Chrome on android is completely unusable due to no adblocker, is this what WEI will do everywhere?


>Can someone give a counter argument of how this might benefit the users themselves?

Remote attestation might reduce the amount of cheaters in games and fraud in banks if implemented properly. So, through potential indirect means.

I don't think any of this is worth the loss of user freedom and functionality, though. So I will vehemently oppose WEI and similar to be used outside of internal facilities.


Anti-cheat systems are already, in effect, remote attestation for online games, and they are unable to protect the process perfectly. They cannot detect every alteration to process memory and be correct in their attestation at all times.

And cheating isn't a significant problem in in-browser games; it's not like anyone modifies the browser source to create cheats for them. For competitive online games, meanwhile, we already have a great (imperfect) demonstration of what remote attestation looks like: anti-cheat.

We have AAA titles that work flawlessly under Proton but you can't play them on Linux because anti-cheat needs to do DKOM on your Windows kernel before it lets you play.

I agree with the rest of your post though.


I'm not trying to understand its "technical merits" but what exactly it is. Even experts at The Register are quoted saying it's "nebulous".

So what exactly are we even talking about here? The idea of attestation in general, just this proposal, or is it a Google thing? Does this compare to Cloudflare private tokens or SafetyNet, or are they completely different? If the proposal goes through, what does that functionally mean for browsers, both ones based on Chromium and ones not?

I don't know why it's so difficult to find these details and I'm instead being told to just accept the idea that the premise is unacceptable.


Wow, if an unbiased independent news source as credible as The Register is against it then maybe I should reevaluate!


I figured I’d take a minute to try and find the proposal itself, so I could see what the proponents considered the virtues of this to be.

https://github.com/mozilla/standards-positions/issues/852 https://github.com/RupertBenWiser/Web-Environment-Integrity/...

I stopped reading after the explainer's intro section. The first example is making it easier for websites to sell ads (lmao), and for the other 3 it's extremely questionable whether the proposed remedy even helps. And it's presented as a benevolent alternative to browser fingerprinting, as if we must choose between these two awful choices. It's an absolute joke of a proposal.


May I suggest something like "Enterprise Environment Integrity". How does the public know that the enterprise (i.e. google) it's dealing with is healthy?

The public should have an entity that receives detailed attestation data to assess that. Failing the attestation would revoke the business permit, along with an announcement.


> How does the public know that the enterprise (i.e. google) it's dealing with is healthy?

Because they will pinky promise.

I find it funny that for some reason companies get the benefit of the doubt when it comes to handling data in a responsible manner. Yes, it's possible that they do. But it is also possible that they don't, and no matter what they say in public, that's just words; it doesn't prove anything about what is really going on - and that's before we get to honest mistakes.

There is simply no way to be sure, all you know is that once you transmit data to any other host on the internet that it is quite literally out of your hands whether or not that data will one day show up elsewhere.


> Because they will pinky promise.

It goes a bit deeper than that. Many companies these days "choose" to get certified under a variety of standards (the most common one is ISO 27001), everyone who hasn't been completely ignorant is looking for or already got cybersecurity insurance and on top of that comes the entire GDPR saga. Basically, you got three levels of auditors that at least make sure the basics are covered, and on top of that come industry specific requirements such as TISAX [1], US SOX Act compliance or whatever AWS had to go through for GovCloud.

[1] https://en.wikipedia.org/wiki/Trusted_Information_Security_A...


I'm well aware of those audits and the typical auditor output. Where to start...

These audits check paper. They don't check what is actually going on, they will check that documentation is in place and that processes and various controls are in place and that periodic checks have been performed. They don't actually check any of the underlying tech. For instance: ISO27001 asks for 'regular pentests'. But it doesn't do anything to ensure that those pentests are of a good quality. There are plenty of 'check the box' automated pentest services that get you past that hurdle but that won't do much for your security. They might even give you a false sense of security.

So yes, there are all kinds of audits. But like with everything else the devil is in the details and the quality of the work is very important (obviously!). Many companies treat these certifications like a 'license to operate'. You need to have them so then you do the absolute minimum required to satisfy the auditors rather than to see the certificate as a minimum level of proof required that you have your house in order and that your intent is to deal responsible with your data subjects (and customers) data. The latter is extremely rare.


Yeah, ISO 27001 is paper pushing, but TISAX audits can be pretty painful and detailed (I've been tangentially involved in one, but can't say more). I think insurances are the player to watch the most - they have actual leverage on their customers and really really want to avoid having to pay out for claims, so their demands start to get more and more intrusive, to the point where they can and do collide with stuff like employment laws (e.g. monitor all employee communications through a mandatory proxy service, even if the device is not on-prem).


So do TISAX auditors actually go into systems and check stuff or do they talk to people and look at documents?


Not sure if I can answer that without violating some NDAs, but I think it's safe to say that it's actually worth being called an audit.


Ok, I'll do some reading then - thank you. And no worries, I understand about the NDA; this being generic information, I was hoping you could answer, but I totally get that you wouldn't take any risk with that.


It doesn't matter to the public. Each site chooses what attestors it trusts and the site can keep track of how useful that signal is. If the signal turns out to be useless the site doesn't have to use it for anything or can stop collecting it.


> In the normal world, you show up at the store with a five dollar bill, pick up a newspaper, and the store sells you the newspaper (and maybe some change) in exchange for the bill. In Google’s proposed world, five dollar bills aren’t fungible anymore: the store can ask you about the provenance of that bill, and if they don’t like the answer, they don’t sell you the newspaper. No, they’re not worried about the bill being fake or counterfeit or anything like that. It’s a real five dollar bill, they agree, but you can’t prove that you got it from the right bank. Please feel free to come back with the right sort of five dollar bill.

Side note: This at least would occasionally happen if you tried to spend Scotland or NI £5 notes in England.


IDK why people try so hard to cram metaphors into things, especially when the metaphor is more confusing than the thing they are trying to explain. It's not at all like currency and fungibility.

It's like Android SafetyNet where apps can work out if the device is rooted and running custom software underneath the browser/app.


The reason for using metaphors is because most users don't understand the more direct comparables.

Pretty much all non-technical users -- and a great many technical users -- have never heard of SafetyNet and don't know what it does.

Metaphors are imperfect; that's inherent to their very nature. That doesn't make them useless.


You don't even need to know what SafetyNet is; the description after it completely summarizes it and makes way more sense than talking about cash, which I can't see any relation to the original topic.

I'm not sure how a non-technical user is meant to get any kind of understanding of WEI from that metaphor. It can be explained quite simply: "Websites can check whether your browser has extensions or whether your OS has been modified, and refuse access based on this information."


But that "explanation" doesn't really mean much to most people. It says words but does not say why they are bad.


It tells you exactly what is going on. It's up to people to decide for themselves if that's bad or not. I can't see any way to understand WEI going by the metaphor of buying newspapers. How does that explain anything?


It tells me what's going on, which is useless because I already knew what's going on. It does not tell anyone who actually needs to be told what's going on.


Tbh, in practice that really has something to do with counterfeiting worries.


Devil's advocate: Is WEI not tackling counterfeiting of a different kind?


The difference is the cultural expectation. The web has always been like Venice's Carnival: people wear masks, some very elaborate, some very thin. Now Google wants to go around ripping masks from everyone's faces, because "we are doing some serious business here". It ruins it for everyone.


It's a double-edged issue. On one hand, yes, it is actually useful for saving users from logging in to their bank on a malware-infested computer. But it also completely kills the last bit of computing freedom and will be used to kill ad blockers and enforce DRM.


That's closer to my inability to spend US dollars in England. Different countries have different currencies.


No it's not, Scottish bank notes aren't of a different currency - they're still pound sterling. The reason they're typically not accepted in English shops (at least, those not on the Scottish border) is most often because they're rather uncommon so it's more difficult for cashiers to detect fakes. My understanding is also that some banks, when depositing, require the English and Scottish notes to be separated and may charge a fee to convert them to English notes, so it's more effort to accept and handle them.


Can you (British equivalent of Venmo/Paypal) Scottish pounds to someone who receives them as English pounds?

I don't know why Britain doesn't just implement a national printing press.


There's no such thing as Scottish pounds or English pounds, the UK uses pound sterling (internationally abbreviated to GBP).

There are English banknotes and Scottish banknotes, but they're the same currency.


But banknotes are literally issued by private banks? Why not have a national mint that covers all GBP?


I can't convey how disgusted I am at the thought of WEI becoming a reality.

It will lead to three webs: the remainder of the open web, the new closed web, and the pirate web.

Personally I'll do my bit to preserve openness, even if that means working socially and technically to support the new world of piracy. It will always be a losing battle without institutions fighting for openness, though.

This is a moment when Sun's old line - "the network is the computer" - starts to look hideous and dystopian. Prophetic, but maybe not how we thought.


It's not immediately obvious to me that the closed web will have anything good on it. People who want other people to see their stuff won't lock down who can visit; it seems like it's mainly for ad-supported crap? Optimistically, the web will break apart into some AOL Disneyland Cable shit experience and an actual good internet whose participants are not just pretending to have engaging content so they can get ad views. I know that sounds too optimistic; what's the flaw in it? Google will use its monopoly on a few things to push it. I'm happy to move away from Gmail and I don't use Google search anyway. What other practical changes will there be?


> What other practical changes will there be?

Your online banking will stop working on your unapproved software, just like your banking app stopped working on your rooted/old Android phone some 3-5 years ago.


Every website with google ads becoming unusable. And also, every site using google captcha.


This has given me an idea for an SEO-spam-proof search engine along the lines of Marginalia: crawl the internet for sites with no Google Ads


In other words, Google earnestly believes your browser belongs to them and you're just using their tool. They're not really wrong, either. What did we think would happen when Google (an ad company) dominated browser market share...


If it belongs to them, then they assume legal liability for everything my browser does, right?


That attitude should have been apparent from the fact that you can't even change the new tab page, and a thousand other things.


You can change the new tab page via extension. E.g. https://chrome.google.com/webstore/detail/empty-new-tab-page...


Try that on mobile with their browser.


Google is edging towards believing that the internet belongs to them.


I suspect they already believe that.


I suspect they're largely correct about that, sadly.


Render unto Caesar...


The underlying hostile technology is "remote attestation" and it's what we should all be fighting against.

People justify it by pointing to companies wanting control over employees' environments, but IMHO that shouldn't be allowed either. This is also why "zero trust" is problematic: they want to replace humanity with centralised control.


Yeah, cryptography is bad and consumers shouldn't be allowed to prove their browser hasn't been modified or have safety when using biometrics or making payments.


"modified" according to whom? Users should be able to use software the way they want, not the way Big Tech dictates.

Governments are scared because they realise cryptography is being used against them.

Now users should realise the same.

Strong encryption is classified as a weapon for good reason. Just like you'd want a gun in your hands, but not one pointed at you.

Then again, the fact that you call them "consumers" instead of "users" already shows what side of the debate you're on.


I'm sorry but there's a class of users who want to be able to prove they haven't made modifications and they refer to themselves as consumers. They're the people who don't jailbreak their devices for exactly that reason and this protocol will support them. Oh, and they happen to be the bulk of the users.


> I'm sorry but there's a class of users who want to be able to prove they haven't made modifications and they refer to themselves as consumers. [..] Oh, and they happen to be the bulk of the users.

No they don't. No single person on this planet wants this. What they want is to be allowed to buy/stream/consume stuff with as little friction as possible.

You're barking up the wrong tree mate.


You know how you can pay for stuff at stores with your phone? That didn't just magically happen. It wasn't just "the technology got better" or whatever you tell yourself. That was called Tokenization and what made it possible was cryptographic hardware being rolled out to the masses. They love this stuff.


You've got this all backwards and you even proved their point. People don't _love_ tokenization, or DRM whatever. They LOVE the removal of friction, like buying things with their phone removes friction of carrying a wallet or a card or whatever I guess. On the other hand DRM makes things easier on the company, but as a consumer I don't really give a fuck about them or what extra work it causes them. In fact, they better have to do something otherwise I'm not paying for anything and they're just collecting rent for something I should be able to attain for free.


DRM makes it possible for customers to rent products they don't want to own. If that isn't "removing friction" I dunno what is. Tokenization similarly makes it possible to buy stuff and not have to worry about what the merchant will do with your info. All possible because of cryptography hardware. The alternative is subscription platforms, where the payments happen behind the curtain.
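A toy model of the tokenization flow being described (all names here are illustrative; real payment networks run this inside certified secure hardware, not a Python dict):

```python
import secrets

class TokenVault:
    """Toy sketch of payment tokenization: the merchant only ever sees a
    random token; the mapping back to the real card number (PAN) lives
    inside the issuer's vault."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # A fresh, unguessable token per card; useless outside the vault.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the issuer side can ever map a token back to the PAN.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# The merchant stores and charges against the token; a breach of the
# merchant's database leaks nothing usable anywhere else.
assert token != "4111111111111111"
assert vault.detokenize(token) == "4111111111111111"
```

The design point is that the secret mapping never leaves the vault, which is why cryptographic hardware on the client side made rolling it out to phones practical.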


You seem to be conflating loving the result with loving the process that enables the result. Regular people mostly do not care about anything except the result. A simplified example: people love driving; they do not love the DMV and getting the driver's license that enables them to drive.

Remote attestation is then the equivalent of saying: OK, we can't tell whether the person driving the car has a driver's license, so let's mandate that a license cert is embedded on a chip, implant that chip in you when you get your license, and have it tell the car you're licensed.

You're sitting here saying "people love getting implanted! See, they're all doing it and driving their cars!" and some of us are like: dude, no, that is so fucked up. I don't want anything mandated implanted in me, but I still have to be able to drive a car. I guess if you force me to do this at risk of being homeless, or disabling my freedom of movement, or at gunpoint, I have no real choice. It's antithetical to the human spirit.


and what you're doing is getting all butt hurt over a technical solution. Cryptographic hardware is good.


Good for whom? Good for me? No. Good for totalitarians and their bootlickers? yes.


I think in internal environments like within a company it's fine. Just not in the public, user-facing web.


My web browser (currently Mozilla Firefox running on Debian GNU/Linux, thank you very much) will never cooperate with this bizarre and misguided proposal. And along with the rest of the free software community, I will continue working to ensure we all live in a world where your web browser doesn’t have to either.

That depends on Mozilla. As long as our software comes from corporations, we will just be reduced to begging.


We have come a long way since "don't be evil"; it would be funny if it weren't so sad.


I'm not sure the title is helpful, or the analogy.

It's more like, if you want to borrow a book from the library, you have to bring an FBI agent home with you too, so they can certify that you don't have a photocopier or scanner (or even a pen and paper), that only you can read the book, and not another family member, that if you want to read aloud, your windows can't open and let anyone else listen in, that you read it from cover to cover including the back-page ads for other books in the series, that you can't leave home with the book, to re-lend it out, and so on.

NO. Not on my machines.


On the one hand, I firmly do believe that we need a proper way to verify identity globally over the internet. The Turing Test is over and AI is going to destroy every user-submittable form online.

On the other hand, it's infuriating that advertising is the first front in this war. I specifically don't want advertisers to have my identity. I'm fine with like my Mastodon server or a site like HN to know I'm me because I'm actively interested in interacting with them. I don't want to interact with advertisers, or for them to have my identity, but they're going to wall off half the internet for people who opt out.


WEI is not really an identity system. WEI is more like a binary "is this a real browser" signal.
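A site consuming such a signal would reduce it to a single pass/fail gate. A conceptual sketch (the `integrity_token` field and `verify_attestation` helper are hypothetical names, not the actual WEI API; the real proposal defines its own token format and verification flow):

```python
# Conceptual sketch of a server gating on a binary attestation verdict.
def handle_request(request: dict, verify_attestation) -> dict:
    """Return 403 unless the attester-signed token checks out."""
    token = request.get("integrity_token")
    if token is None or not verify_attestation(token):
        return {"status": 403, "body": "client environment not attested"}
    return {"status": 200, "body": "ok"}

# Stand-in verifier that accepts one known-good token.
trusted = lambda tok: tok == "signed-by-attester"
print(handle_request({"integrity_token": "signed-by-attester"}, trusted)["status"])  # 200
print(handle_request({}, trusted)["status"])                                         # 403
```

The concern in this thread is precisely that the gate is binary: there is no partial credit for a modified-but-honest client.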


On the internet, no one knows you're a dog.

https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...


Once WEI goes live: “On the internet, everyone knows if you’re using a stock Android device”


Funny that this was cross posted to fediverse, a network that is heavily reliant on digital signatures to prevent lying.


The point is it’s optional, right?


Good luck getting another server to accept your post without cryptographic attestation of its origin.
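Mechanically, that attestation of origin looks like this. A toy stand-in using an HMAC as the signature primitive (real fediverse servers use asymmetric HTTP Signatures: the origin signs with a private key and receivers verify with the actor's published public key, so anyone can verify but only the origin can sign):

```python
import hashlib
import hmac

# Toy stand-in: an HMAC plays the role of the origin server's signature.
def sign(key: bytes, post: bytes) -> str:
    return hmac.new(key, post, hashlib.sha256).hexdigest()

def verify(key: bytes, post: bytes, signature: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, post), signature)

key = b"origin-server-secret"
post = b'{"type": "Note", "content": "hello fediverse"}'
sig = sign(key, post)

print(verify(key, post, sig))         # True: untampered post accepted
print(verify(key, post + b"!", sig))  # False: tampered post rejected
```

The point stands either way: a receiving server rejects anything whose signature doesn't check out against the claimed origin.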


Android apps can use SafetyNet for a similar purpose, but most don't. You are spreading FUD.

Additionally, if a place of business asks you for state ID, you are free to choose not to share it. It's not your choice whether they get to ask, nor your right to provide a fake ID and expect it to work. Views on the website make it sound like everyone should be allowed to own a gun without a licence. I bet most people don't feel that way in reality.


People built the internet just to destroy it. It's not a tech problem, it's economics: ever-decreasing profits, because more people learn the information and resell it for less. Tor will become more and more expensive, until people realize it needs the same hardware trick under another brand, like internet2, 3, 4, 5, ... Seeking the solution by hacking won't stop this forever battle. It only ends when the imagined fair price for information meets the offer.


The wei camp wants a world where you can have your open source OS with encryption that actually works but almost any commercial website won't talk to you.

Wei will get integrated with your CBDC.


Upvote if you actually read the proposal in question.


The premise of this article is fundamentally wrong.

> On that Web, if you send a valid request with the right data, you get a valid response.

Explain DoS protection then.


"By analogy: right now, you can tell your browser to change its User-Agent string to anything you want."

You can also choose not to send this header. By default I do not send it. The RFCs do not require it^1 and very rarely do I find sites that do. When I do find such sites,^2 I just add them to the proxy config so that a UA is added on the way out. Almost invariably these sites will accept a made-up UA, so long as it is well-formed, which is interesting.^3 It suggests no one knows what new UA strings will appear.
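The mechanics described here are trivial to reproduce. A minimal sketch building a raw HTTP/1.1 request in which the User-Agent line is either omitted entirely or set to a made-up value (the hostname and browser string are just examples):

```python
def build_request(host, path="/", user_agent=None):
    """Build a raw HTTP/1.1 GET request; the User-Agent line is optional."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}", "Connection: close"]
    if user_agent is not None:  # omit the header altogether when None
        lines.append(f"User-Agent: {user_agent}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# No User-Agent at all -- a valid request; the RFCs treat the header as optional.
print(build_request("example.com").decode())

# A made-up but well-formed string, as a proxy might add on the way out.
print(build_request("example.com", user_agent="MadeUpBrowser/1.0").decode())
```

Nothing on the wire distinguishes a "real" User-Agent from an invented one, which is exactly the property WEI is designed to take away.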

The origin of changing the User-Agent header dates back to one of the earliest browsers, written in part by a well-known Silicon Valley VC.^4 It was always possible for the user to control HTTP headers such as UA and the designers knew it. Later in the "browser wars" Microsoft changed its UA header to match Mozilla's.^5

1.

https://towardsdatascience.com/the-user-agent-that-crazy-str...

2.

For example, www.federalregister.com and sec.gov.

3.

Many users want to "blend in" and use common strings so perhaps use of made-up strings remains largely untested.

4.

https://raw.githubusercontent.com/alandipert/ncsa-mosaic/mas...

5.

https://webaim.org/blog/user-agent-string-history/comment-pa...

https://humanwhocodes.com/blog/2010/01/12/history-of-the-use...

As for WEI, I'm inclined to think that the terms "abuse" and "fraud" in the spec may actually refer to ad fraud, including potential fraud by Google itself in marketing its ad services because it hides the true extent of ad fraud from its customers.

People who like to access the web with uncommon TCP/HTTP clients may not be a significant problem. There are no details given in the spec about this alleged "fraud"; perhaps that's intentional. Though with the spec being this vague, real humans who prefer not to use popular browsers may jump to conclusions.^6

It could be that we're on the cusp of exposing the true extent of ad fraud with respect to Google, and the ultimate unworkability of Google's core "business" (selling ad services). Perhaps Google believes its advertiser customers could begin to lose trust and direct more ad spend to Apple.

6.

Some pre-spec discussion: https://groups.google.com/a/chromium.org/g/blink-dev/c/Ux5h_...


For crying out loud, Google


What is so wrong with Web Environment Integrity? It's a great policy, and the notes CLEARLY outline how the main benefit is going to be for the users and website developers. If you're not doing anything wrong, then you shouldn't have an issue.


At no point does he explain what the hell the rant is about.


[flagged]


Where are you seeing that? This article discusses lying about one's user agent string, presumably to get better behavior out of a website that's making bad decisions based on it. That is not a crime last I checked.


>Not looking at ads is a crime.

Oookay, I had enough HackerNews for today.


I was just thinking.. must have found the google employee. Checked your profile haha yep.

about: Formerly of Google and Quip.


Almost anything one can do, of value to their fellow humans, is a crime somewhere.


"crimes" according to who?


There is no right to lie. There is a right to remain silent. That is what "Web Environment Integrity" threatens.


> There is a right to remain silent. That is what "Web Environment Integrity" threatens.

Google's WEI doesn't threaten your right to be silent.

Based on Google's previous behavior, if your web site doesn't go along with its plan, it will be more than happy to silence/delist/derank it.


That's the thing though. Even though Google's search quality has diminished, and they attempt to enforce things like WEI or other abjectly terrible things, they remain the market leader in search, and this threat still holds a lot of weight. The amount of inertia to go against this change feels Herculean, and I'm honestly not sure how we would go about educating the vast majority of the Internet's user base to care about it.

Sure, as the technologists we are, we can see just how dangerous this can be, and why it's offensive to even consider. Without getting the masses on board though, we're pretty outmatched and outgunned.


You need to include the first phrase. It is the right to lie that is being threatened.


It threatens the right to not disclose certain information in your user agent.


I disagree, you have the right to set your user-agent to anything you’d like, or nothing at all.


I am not sure where you are getting this as a right, because things like fraud are illegal. It might be good policy but where is the right?


Agreed there is no explicitly enumerated right, but in the context of computing, it is not illegal for a computer to transmit “untrue” information. Fraud is a significantly higher bar. I am not a cyber-lawyer.


In the U.S., most rights are not explicitly enumerated. People are assumed to have all conceivable rights, and the restrictions must be explicitly enumerated. There is no law against lying in general.

There is, however, an explicitly enumerated right to freedom of speech. There are only a handful of recognized categories of non-protected speech. Lying isn't one of them.


In American law, the right to speak is assumed. It is the lack of a right to speech that is the exception. There are only a handful of categories of unprotected speech, which can be easily enumerated, and lies aren't one of them.

Fraud, defamation, perjury, and obstruction of justice are illegal. That's a much narrower restriction than a ban on lies.


You cannot keep me from lying. You can only hold me accountable if my lies do harm. So I would take it as a "natural right" even though I realize those are contended.


I found a page with some helpful suggestions for user-agent strings that could be adopted by default, ideally at *scale*.

Google Crawlers and User Agent Strings – 2023 List

https://www.stanventures.com/blog/googlebot-user-agent-strin...


> That is what "Web Environment Integrity" threatens.

This is the important part.


Actually, it’s not a right to remain silent that is threatened. It’s a threat to refuse to let anybody else speak to you or you to anybody else unless you first give Google enough info that they can silence you.


There is no right to remain silent in the United States. Courts can compel testimony.


> There is no right to lie.

Tell that to the police.


> If your computer can’t lie to other computers, then it’s not yours.

And why is that not okay?

I think this sort of attitude is left over from when computers were expensive. Nowadays, I have multiple computers, some of which are fun toys I mess with, while others are appliances that I just use for their intended purpose. And that's fine, because when I screw up, maybe I don't want to have broken the computer that I use for video chats and to do my banking? Maybe I don't want my main phone to stop working?

It's okay to be a hacker and buy a router that you just use as a router and a Chromebook that you just use for web browsing. You can also buy a Raspberry Pi and mess with embedded programming on cheap devices. The appliance computers should be as low-maintenance as possible so you have more time for hacking.

The nice thing about really cheap devices like a Raspberry Pi Pico is that if you actually build something useful for real work, you can deploy it, stop messing with it, and buy another computer for experiments.


You're absolutely welcome to choose to have an appliance; for some purposes that may be desirable. Don't tell other people they can't have a general-purpose computer.


I didn't. I said it's okay to own non-general-purpose computers. The article claims it's not.


> Maybe I don't want my main phone to stop working?

You are conflating the alleged benefit of locking down devices to assure users don't break them [1], with websites and services getting the ability to remotely verify your software/hardware stack is "approved", and block you if it isn't.

It's not about what you want - it's about taking away your ability to choose. The "fun toys" that can be modified to your liking will get increasingly useless as they'll be blocked from large chunks of the web, especially after Google will start pushing WEI if sites want ad revenue, under the logic of preventing click-fraud.

[1] There are plenty of ways to limit unlocking to the technically savvy and to make it tamper-evident to the owner (e.g. a "bootloader unlocked" notification during boot), and many existing phones implement them, so any claim by phone or other device manufacturers that devices must be impossible to unlock is an outright lie.


I don't understand the point you are trying to make and how it relates to the quote or the post as a whole.


Your bank and the company that hosts your video chatting software should pay for the computer, if it isn't yours.


Why? Even if you leased it, you're still paying for it in the end. You're benefiting from using it.



