By default, many compilers embed things like local filesystem paths, build server hostnames, or build timestamps in their binary artifacts. These obviously differ from build to build.
Even without that, it's possible to accidentally leak entropy into the build output. For example, readdir() doesn't guarantee any ordering, so without sorting the list of files, a build (or even a plain tar) can produce different output from the same input.
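A minimal sketch of the usual workaround, in Python (plain tar and the specific fields pinned here are just my illustration of the idea): sort the file list yourself and zero out the metadata that would otherwise leak into the archive.

    # Sketch: a deterministic tar of a directory tree. Filenames are sorted
    # (readdir/os.walk order isn't guaranteed) and per-file metadata that
    # leaks entropy (mtime, uid/gid, owner names) is pinned to fixed values.
    import os
    import tarfile

    def deterministic_tar(src_dir, out_path):
        paths = []
        for root, dirs, files in os.walk(src_dir):
            dirs.sort()                      # force a stable traversal order
            paths.extend(os.path.join(root, f) for f in sorted(files))

        def scrub(info):
            info.mtime = 0                   # drop build timestamps
            info.uid = info.gid = 0
            info.uname = info.gname = ""
            return info

        # Plain tar (no gzip) because the gzip header embeds its own timestamp.
        with tarfile.open(out_path, "w") as tar:
            for p in paths:
                tar.add(p, arcname=os.path.relpath(p, src_dir), filter=scrub)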
Over the years, I've had direct knowledge of or involvement with maybe a dozen events that ended up in the mainstream news. In almost every case the reporting was inaccurate in some way, sometimes getting basic facts wrong, other times carrying significant bias or spin that was misleading.
This hasn't given me great confidence in the accuracy of the reporting for things that I don't have direct knowledge of.
On cell networks, video content is by far the largest consumer of bandwidth. And video players generally default to auto-adjusting the resolution to the highest quality the network supports. This kind of sucks, since bandwidth is a shared resource for all users of a given antenna on a cell tower.
Though Speedtest on your phone might show your connection speed as 100 megabits/sec down, cell networks special-case video: they identify it and rate-limit it to something like 1 megabit/sec. This is considered "efficient network management". For T-Mobile, this depends on the plan (https://www.t-mobile.com/cell-phone-plans): they sell either "SD streaming" or "4K UHD streaming". "SD streaming" is a fancy way of saying they rate-limit identified video streams to 1 megabit/sec.
They identify video streams by watching the IP address your phone connects to and/or the hostname in the TLS SNI extension and checking whether it belongs to YouTube, Netflix, etc. Sending video content over a VPN removes their ability to tell what the content is.
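For a sense of how cheap that classification is for a carrier, here's a rough Python sketch of pulling the hostname out of a ClientHello's SNI extension and matching it against known video domains. The domain list and the assumption of a single unfragmented TLS record are mine; I have no idea what the carriers' actual implementations look like.

    # Sketch: classify a TLS connection as "video" from the cleartext SNI in
    # the ClientHello. Field offsets follow the TLS record/handshake layout;
    # error handling is minimal, and VIDEO_HOSTS is just an example list.
    VIDEO_HOSTS = ("youtube.com", "googlevideo.com", "netflix.com", "nflxvideo.net")

    def sni_hostname(record):
        # Return the SNI hostname from a raw TLS ClientHello record, or None.
        if len(record) < 43 or record[0] != 0x16:   # 0x16 = handshake record
            return None
        hs = record[5:]                             # skip 5-byte record header
        if hs[0] != 0x01:                           # 0x01 = ClientHello
            return None
        try:
            p = 4                                   # handshake type + 3-byte length
            p += 2 + 32                             # client_version + random
            p += 1 + hs[p]                          # session_id
            p += 2 + int.from_bytes(hs[p:p+2], "big")   # cipher_suites
            p += 1 + hs[p]                          # compression_methods
            ext_end = p + 2 + int.from_bytes(hs[p:p+2], "big")
            p += 2
            while p + 4 <= ext_end:
                ext_type = int.from_bytes(hs[p:p+2], "big")
                ext_len = int.from_bytes(hs[p+2:p+4], "big")
                if ext_type == 0:                   # server_name extension
                    # data: list len (2) + name type (1) + name len (2) + name
                    name_len = int.from_bytes(hs[p+7:p+9], "big")
                    return hs[p+9:p+9+name_len].decode("ascii", "replace")
                p += 4 + ext_len
        except IndexError:
            return None
        return None

    def looks_like_video(record):
        host = sni_hostname(record) or ""
        return any(host == d or host.endswith("." + d) for d in VIDEO_HOSTS)

Over a VPN, the middlebox only ever sees the VPN endpoint's hostname/IP, so this kind of matching has nothing to key on.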
Then couldn't Apple add some metadata on the user's behalf marking this traffic as a video stream? Of course it could be spoofed by the user, but if the hard-to-change default is well defined by Apple, then networks could depend on and use that info.
It would probably be clearer that they exist if the console redirected to the regional URL when you switched regions.
STS, S3, etc. have regional endpoints too, which have continued to work when us-east-1 has broken in the past, and the various AWS clients can be configured to use them, which they also sadly don't tend to do by default.
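A minimal boto3 sketch of pinning STS to a regional endpoint (us-west-2 is just an example region):

    # Sketch: point an STS client at a regional endpoint instead of the
    # global (us-east-1-backed) default.
    import boto3

    sts = boto3.client(
        "sts",
        region_name="us-west-2",
        endpoint_url="https://sts.us-west-2.amazonaws.com",
    )
    print(sts.get_caller_identity()["Arn"])

The SDKs and CLI can also be told to prefer regional STS endpoints globally, e.g. with AWS_STS_REGIONAL_ENDPOINTS=regional or the sts_regional_endpoints setting in the shared config file.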
Enabling Warp via the "1.1.1.1" Android app gets me an 8.x.x.x VPN address, at least. This /24 appears to be routed to my city's Cloudflare node, so presumably there's a /24 per city they run this service in.
Running a quick port scan from my phone against one of my machines works, so it doesn't look like they are restricting this too heavily.
And I'm not logged into this app and haven't granted it additional permissions, so I'm not sure they have any idea who I am here.
It's not edge location count that matters. Cloudfront doesn't use BGP Anycast; it does more traditional DNS-based routing and intentionally spreads requests across multiple edge locations (even farther-away ones) for redundancy.
When I asked for detail about why they don't use Anycast, the Cloudfront engineering team basically said their customers care more about uptime than latency and that full Anycast was too sketchy. Apparently amazon.com disagrees, at least. I'm also happy getting much lower first page view latency out of Cloudflare.
Having given Elastic's support two tries at different companies, it doesn't surprise me that their business model is failing. Their support was _terrible_ both times; at no point were we in touch with anyone who seemed to understand the product, care about our issues, or be in any hurry to fix them. We were locked into year-long, six-figure support contracts in both cases, and issues dragged on for months until we basically gave up. We got better answers out of random Google searches and a 20-minute conversation with a friend of a friend.
AWS's hosted ElasticSearch has only recently become able to handle the data set sizes we were dealing with, and their enterprise support on this (and other products) is vastly better than anything we ever got out of Elastic.
> We got better answers out of random Google searches and a 20 minute conversation with a friend of a friend.
On the other hand, if you were asking support to train you on "how to do A, B, C" without even checking the documentation or doing a basic Google search, I can understand why you were disappointed. Paying more for support doesn't change the nature of it; it doesn't magically become a Google bot for you.
Could you maybe give an example of the issues you reported to support?
If I have a six-figure support contract and ask you how to do a simple thing with your product, you'd better have an answer. If it's so simple, why are you worried about it, when I'm paying for it?
We weren't having issues solved by basic documentation.
In the most recent example, we were occasionally hitting Java heap OutOfMemory errors under our workloads and wanted tuning or even architectural advice. It turns out that ElasticSearch didn't limit ingestion rate to control memory pressure and was happy to accept writes under load until it exploded. Heavy users of ElasticSearch commonly have to watch ES memory pressure and throttle their own writes client-side.
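Roughly what that client-side throttling looks like, assuming an elasticsearch-py 7.x-style client and that back-pressure shows up as HTTP 429 rejections (the index name, delays, and retry counts are made up):

    # Sketch: back off and retry when the cluster pushes back (HTTP 429,
    # i.e. es_rejected_execution_exception) instead of writing as fast as
    # the client can generate documents.
    import time
    from elasticsearch import Elasticsearch, TransportError

    es = Elasticsearch(["http://localhost:9200"])

    def index_with_backoff(doc, index="events", max_retries=8):
        delay = 1.0
        for _ in range(max_retries):
            try:
                es.index(index=index, body=doc)
                return
            except TransportError as e:
                if e.status_code != 429:    # only retry on back-pressure
                    raise
                time.sleep(delay)           # let the cluster drain its queues
                delay = min(delay * 2, 60)  # exponential backoff, capped
        raise RuntimeError("writes kept getting rejected; giving up")

Real workloads would wrap bulk requests and ideally watch heap/queue metrics too, but the point stands: the back-pressure logic ends up living in the client, not the server.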
I would've loved to hear these limitations from elastic.co, be offered some tips on appropriate throttling techniques, or have them accept a feature request to handle this better server-side. We never got anywhere near that depth of understanding of our problem, even after months of trying. It felt like we were talking to first-level support who didn't understand the product much better than we did.
Based on reading the LinkedIn of the Parler execs, their code is node.js and some Go with Cassandra and Postgres for storage and RabbitMQ for queuing. That sounds like it will run anywhere they can rent a pile of Linux boxes.
They've tried to avoid lock-in, specifically mentioning avoiding any Google technologies in their mobile apps. However, they are using Route53 for DNS, Cloudfront as a CDN, and ALB for load balancing, so there are a few commodity services they'll need to swap out.
It takes more than that to host a site like Parler. They need load balancing, a CDN for performance (good luck there), and security protections (DDoS, in particular). You know any site they stand up will be a prime hacker target. They need some good network and security engineers to keep this site up.
It also depends a bit on how they set it up. If it was all done as code or config, then they're in luck. If they just used the GUI console, they're in real trouble; that will take them until past inauguration, which I suspect is the point.