United seems to like to hang onto extremely old airplanes even as the number of these disruptions mounts. We can argue about whether they're statistically safer and so on, but these events are extremely unsettling and disruptive for passengers, and frankly it's lucky no one has been killed yet. One of these planes dropped a wheel on a parked car at SFO last year.
It's not hard to notice that other major airlines generally maintain newer widebody fleets.
Genuinely curious - what do people want to see from a new/different rendering engine?
The web is crazy complex these days because it is an entire app platform.
The incentive for anyone building a browser is to use the platform that gives you the best web compat, especially at the outset, when you don't have enough users of your app to be able to make big changes to the platform. Even Chrome didn't start from scratch - it used WebKit!
The Chromium community has built an excellent open platform that everyone can use. We are fortunate to be able to use it.
> trusted with big, critical open source projects.
You talk as if the community has appointed Google to take care of these projects. Google is spending $$$ writing code and open sourcing it. Not the other way around.
And as with anything open source, if you don't like the direction the code is taking, fork it.
If I have an open source project, you don't say 'bitpush can't be trusted with the project'.
The Play Store services are not a critical open source project, though. The AOSP is still intact and maintained in accordance with the licensing.
The application signing backtrack is an issue, but more of a political problem than a technical one. America's lesson here has been written on the wall for years: regulate your tech businesses, or your tech businesses will regulate you.
> Genuinely curious - what do people want to see from a new/different rendering engine?
It should be fast at rendering HTML/CSS. I don't really care about JavaScript performance, because where possible I switch it off anyway.
It should be customizable and configurable, more than Firefox was before Electrolysis and certainly much more than Chrome.
It should support addons that can change, override, mangle, basically do everything imaginable to site content. But with configurable permissions per site.
It should support saving the current state of a website including the exact rendering at that moment for archiving. It should also support annotations (like comments, emphasis, corrections) for that. And it should support diffs for those saved states.
And if you include "the browser" in that:
I want a properly usable bookmarks manager, not the crap that current browsers have. Every bookmark should include (optionally, but easily) the exact page state at the time of bookmarking. Same for history.
Sync everything to a configurable git repo: config, bookmarks, history, open windows/tabs, annotations and saved website snapshots.
I want easily usable mass operations, like "save me every PDF from this tab group", "save all the pictures and name them sometopic-somewebsite-date-id.jpg" or "print all tabs that started with this search and all sites visited from there as PDF printouts into the documentation folder".
I want the ability to watch a website for changes, so the browser visits it in the background and notifies me if anything relevant is different (this could be a really hard thing to get right, I guess; see the sketch after this list).
I want "network perspectives" (for lack of a better word): show me this website as it would look from my local address, over this VPN, with my language set to Portuguese, ..., easily switchable per tab.
I want completely configurable keybindings for everything, like vimperator, but also for the bookmark manager, settings, really everything.
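To make the watch-for-changes idea concrete, here's a toy Node.js sketch (assumes Node 18+ for the global fetch; every name in it is hypothetical). The hard part, as noted above, is deciding what counts as a "relevant" change: a naive hash re-fires on every load because of timestamps, ads, and per-request tokens, so some normalization is unavoidable.

    // Toy sketch of the "watch a website for changes" item above.
    // Assumes Node 18+ (global fetch). All names are made up.
    const crypto = require("crypto");

    async function snapshot(url) {
      const res = await fetch(url);
      const html = await res.text();
      // Naive "relevance" filter: drop <script> blocks before hashing,
      // since they often embed per-request nonces. A real implementation
      // would need far smarter normalization than this.
      const stripped = html.replace(/<script[\s\S]*?<\/script>/gi, "");
      return crypto.createHash("sha256").update(stripped).digest("hex");
    }

    async function watch(url, intervalMs, onChange) {
      let last = await snapshot(url);
      setInterval(async () => {
        const next = await snapshot(url);
        if (next !== last) {
          last = next;
          onChange(url);
        }
      }, intervalMs);
    }

    // Poll every 10 minutes; log when the page content changes.
    watch("https://example.com", 10 * 60 * 1000, (u) => console.log(u + " changed"));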
> The web is crazy complex these days because it is an entire app platform.
I'd prefer something that's not crazy complex, that's not "an entire app platform" designed and implemented by Google. Google essentially controls the W3C (Mozilla would vanish if Google stopped funding it), and controls the monopoly rendering engine.
Half of websites are better without JavaScript and web fonts, and 99% are just text, images, and videos with maybe a few simple controls. For the other 1% I can fire up Google Chrome and suffer the whole platform.
I want a web rendering engine for the 99%, one that does the simple stuff quickly and isn't a giant attack surface built around 30 years of technical debt and unwanted features calling itself an "application platform."
This actually reminds me that early in the HTML5 era, one of its key selling points was that you could play videos using just the <video> element. There would be no need for Flash, Silverlight, or JS. However, these days it is extremely rare to come across a site that can successfully play videos with JS turned off. Complicated JS has de facto become a requirement for video, but it doesn't have to be.
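For what it's worth, the no-JS path still works at the markup level; a snippet like this (the file name is just a placeholder) plays video in any modern browser with zero script:

    <video controls width="640">
      <source src="talk.mp4" type="video/mp4">
      Sorry, your browser doesn't support the video element.
    </video>

The sites that break with JS off usually do so because they gate the stream behind scripted players, ad integrations, or DRM, not because the markup requires it.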
I too have nostalgia for a time when prices were reasonable, politicians didn't philander and children respected their elders.
And yet here we are :-)
For what it's worth, despite it being /in vogue/ to rag on Google, the Chrome team has some of the most talented and dedicated folks focused on building a vibrant and interesting web for most people in the world.
I think the concerns are not about feature requests but about leveraging embrace-extend-extinguish dynamics to push the web as a whole closer to being locked into dependence on Google as a platform. There are mountains of articles on the topic, ranging from ad blockers to privacy to DRM. But the critiques are old news to anyone who's been following the topic for a while.
Incognito clearly states how it works every time you start it, including what it doesn't protect against.
If we're saying that developers can't clearly and obviously state how things work, and are instead bound by however people think they work based on not reading anything at all, we're in a lot of trouble.
Though... can we at least then get rid of every intrusive TOS screen and cookie banner in existence? Because people click past all of those too without reading them.
I'm about a half hour into this, and listening to Marc talk about newsgroups brings strong pangs of nostalgia. These days I'm a bit of a greybeard (salt-n-pepper beard?) of web browsing, but I remember getting started in the late days of Netscape, as a teenage open source hacker discovering all the Netscape engineers sitting on the npm.* newsgroups... How wild it was to be able to turn up there with a question about the browser you used every day and have someone working on it answer! Netscape didn't survive, but what a legacy.
That world lived on for quite a while through different mediums. I remember joining the WebKit IRC channel in the early days and being full of wonder that folks like Hyatt were just hanging out, willing to chat with me and answer questions.
There's something really special about the community and openness of folks who work on web browsers. Maybe it traces its way back to the newsgroups.
The hierarchy there was basically a reflection of the company's browser team org chart. You could find a group for every team working on the browser where many of them were having their regular technical conversations.
Just now I am realizing that Slack is a lot more like a Usenet client than it is like an IRC client.
I mean. It’s still very far from actually being NNTP, and it’s not decentralized like Usenet or anything like that.
But all this time I’ve been thinking of Slack as “better IRC, with images and links and threads”.
When really Slack is more like “fancy Usenet service with client that renders images and other attachments”. (Although on the protocol and server and client implementation level it is very different from NNTP.)
Well. At least we don't have to inefficiently yEnc-encode attachments or split them into a bunch of pieces with par2 files. So there's that.
node.js and Netscape are about 20 years apart ;) I also don't remember an npm. newsgroup hierarchy. As a teenager during that time I recall some binary newsgroups though :)
Have a BMW X5 with this auto-steer nonsense and have had several incidents where it abruptly turned the wheel; if I hadn't been grabbing it, it would have caused an accident. Ended up disabling the assist system.
Two years ago a rental Audi A2 nearly crashed me into the right-side tunnel wall several times. It was a rainy night, and sometimes when I drove into the tunnel, the car steered really hard to the right.
Most Americans don't travel abroad. Those who are accustomed to frequent travel for business or leisure are acutely aware of the current situation because it's already blown up their year.
Quality can be worse in a long-cycle project:
- Engineers are motivated to slam their feature in, because if they miss the train the next one's not for 12 months.
- You get one moment per year to connect with your customers and understand how well (or not) your changes worked. This means either that riskier things happen or that innovation slows to a crawl.
My 2 cents, speaking from some experience working on both long-cycle and shorter-cycle projects.
Firefox has been on a 6-week cycle for a few years now; clearly they found that short releases suited them, and that the cycle was, if anything, too long. Different strokes and all that.
It's not like features go straight from master -> release in 4 weeks. Changes have to go through the developer-edition and beta channels before landing in the release channel.
Why cycles at all? Why not look at what features have been integrated, whether they make up a set that you want to release, and then release?
Neither long cycles nor short cycles make any sense. Some features take a long time to develop, some take a short time. Sometimes features that take a long time to develop aren't user-facing enough to be worth releasing for, and sometimes a quick fix has a huge impact that's worth a release, like a patch for a vulnerability that's actively being exploited by spreading malware. Features simply don't line up with a single length of time. The problem isn't long or short cycles; it's cycles.
Generally speaking, you can release based on the calendar or whenever you think the feature set warrants it. You are advocating the latter, which works well on low-traffic projects. The former is a better idea on high-traffic projects, where there's always something worth shipping, whether it's a new translation, a bug fix, or a new feature.
It depends on the project, but in larger projects the calendar approach means politics takes a back seat: no one can hold back the release just because their feature hasn't been merged due to blocking issues. And releasing more often helps keep the changeset small, and hence the risk lower.
Also, Firefox already has a nightly release stream. I use the Aurora stream, which releases a few times a week, and have almost never had an issue with this frequency. I don't think a monthly release cycle is going to be an issue.
Because regular, predictable releases mean that developers know they can always "catch the next train", and users know they can plan around predictable upgrade schedules.
> Because regular, predictable releases mean that developers know they can always "catch the next train"
This is an argument for frequent releases, not regular, predictable releases.
> users know they can plan around predictable upgrade schedules.
I'm not sure this is actually how users plan upgrades.
The majority of individuals probably never turn off the auto-update flag. Planning doesn't enter the equation.
For organizations, my guess is that most organizations will try to build their upgrade process around security, but the reality will rarely be so clean. When I worked in IT we'd get computers into our shop that hadn't been updated. Period. We'd upgrade our provisioning images when there was a notable security patch, and besides that we'd just run updates on every machine at 2am every Sunday night: that way it didn't interfere with users, but if something went wrong, we were on it with the full team first thing Monday morning. But if machines were turned off or whatever, they wouldn't run the updates. At no point did we ever even check the release schedule of a piece of software: the updates happened on our time, and theirs was irrelevant.
I didn't work in IT for very long, though, so someone with more IT experience should correct me if I'm wrong.
Is "releasing when it's ready" basically what was done in the past for e.g. CD-distributed software?
I imagine that could work well in some cases, but it also allows corporate bureaucracy and/or marketing teams to determine when things get released at larger scales and that might not be so ideal.
Is anyone concerned that, even in the face of re-training, two aircraft could find themselves in this position within the first year of operations?
Let's say you were on one of these aircraft and the pilots were able to recover. How terrifying would that be? This is the designed behavior?
It takes zero effort in IntelliJ (pictured above).
I remember reading a suggestion about proportional fonts in a discussion on HN about code editor preferences. Switched to them several years ago and never looked back.
That triggered a flash of feeling extremely old when I realized we broke ground on this codebase 20 years ago this year!