HN needs a federation megathread. The same debate flares up every other week lol. Anytime Mastodon, email, Matrix, or XMPP comes up, the old culture war gets revived.
You don't need to sign up to multiple servers if you can use the ID you already have to talk with everyone on other servers - just like with email or XMPP.
You can use a mozilla.com user ID to talk to someone with a matrix.org user ID, or a kde.org user ID.
All servers are equal participants in rooms, so a room doesn't live on any specific server. Servers can create friendly names pointing to a room, but nothing stops #foo:kde.org from pointing to the exact same room as #bar:mozilla.org -- both servers participate equally and just have their own shortcut name for it.
I can federate messages across hosts, fine, but I can't move my username to another host, so it doesn't work like email: with email the namespace is defined at the DNS level, and I can forward requests to whatever email host I want to use.
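To make the DNS point concrete (the hostnames below are placeholders of mine, not from the thread): with email, the address stays the same while an MX record decides which host actually receives the mail, so switching providers is a one-line zone change:

    ; you@example.com keeps working no matter who hosts the mail
    example.com.  3600  IN  MX  10  mail.provider-a.example.

    ; to migrate, repoint the record; the address is untouched
    example.com.  3600  IN  MX  10  mx.provider-b.example.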
Being able to leave one server and join another while maintaining an identity (a public key, for instance) is on Matrix's to-do list; they haven't decided how to do it yet, afaik.
With WebFinger, you should be able to do exactly that. Account migration isn't part of email, but while email has MX servers, Matrix (and most nu-fed stuff, not sure about XMPP) should have WebFinger support for that.
Edit: okay, not actually WebFinger. But [0] has instructions for the `/.well-known/matrix/server` way. It only talks about subdomains, but it should work across domains. Possibly also with an SRV record.
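For reference, per the Matrix server-server spec, the well-known delegation file is just a small JSON document served over HTTPS from the base domain (example.org here is a placeholder):

    GET https://example.org/.well-known/matrix/server

    {
        "m.server": "matrix.example.org:8448"
    }

And the SRV alternative is a plain DNS record pointing _matrix._tcp at the actual homeserver:

    _matrix._tcp.example.org. 3600 IN SRV 10 5 8448 matrix.example.org.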
Matrix is like chatting via Git (distributed databases) and tends to make a great team messenger, like Mattermost or Zulip - XMPP is structurally like email ...
You are talking about two different things. Parent was explaining what federation was. Both XMPP and Matrix are federated, just like e-mail.
And I'd say that Matrix is closer to e-mail and XMPP than you seem to assume. Once the database is synchronized, it works pretty much the same way.
Only if a missing message is detected in the graph does a server make another request for the messages it missed (backfill).
Moreover, since you want to be pedantic: Matrix and Git are quite different. Both the synchronization protocol and the conflict resolution are handled differently (Git does very little; Matrix is more like a CRDT in that respect).
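To make the backfill mechanics concrete, here's a toy sketch (my own illustration, not Synapse's actual code): every event references its parent events by ID, so a server can spot holes in the DAG by looking for referenced IDs it has never received - those are exactly what it requests from a peer.

    # Toy model of Matrix-style backfill detection (illustrative only;
    # real servers do this via the federation API, with auth checks).
    def find_missing_parents(events: dict[str, list[str]]) -> set[str]:
        """events maps event_id -> the prev_event ids it references."""
        missing = set()
        for parents in events.values():
            for parent_id in parents:
                if parent_id not in events:
                    missing.add(parent_id)  # hole in the DAG -> backfill
        return missing

    room = {
        "$c": ["$b"],
        "$b": ["$a"],  # "$a" was never received from the remote server
    }
    print(find_missing_parents(room))  # {'$a'}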
Apparently there's an idea for a feature to enable users to transfer their account and data from one federation node to another. Pretty sure it's in the "just an idea" phase, so don't hold your breath.
Well, with email you can a) forward from one address to another, b) set up an auto-reply ("I've moved"), and c) copy your message history via IMAP / maildir / mbox files. Ditto for address book(s).
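Point c) is scriptable with nothing but Python's standard library; a minimal sketch, assuming placeholder hosts and app passwords (all made up):

    import imaplib

    # Connect to both providers (hosts and credentials are placeholders)
    old = imaplib.IMAP4_SSL("imap.old-host.example")
    old.login("you@old-host.example", "old-app-password")
    new = imaplib.IMAP4_SSL("imap.new-host.example")
    new.login("you@new-host.example", "new-app-password")

    old.select("INBOX", readonly=True)        # don't modify the source
    _, ids = old.search(None, "ALL")
    for num in ids[0].split():
        _, data = old.fetch(num, "(RFC822)")
        raw = data[0][1]                      # full raw RFC 822 message
        new.append("INBOX", None, None, raw)  # copy into the new account

(Real migrations usually reach for imapsync or the new provider's import tool, but the idea is the same.)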
>Git focuses on individual branches, because that is exactly what you want for a highly-distributed bazaar-style project such as Linux. Linus Torvalds does not want to see every check-in by every contributor to Linux: such extreme visibility does not scale well. Contrast Fossil, which was written for the cathedral-style SQLite project and its handful of active committers
Ugh... so after all that fanfare about how amazing Fossil is, the author admits it just doesn't scale well. I was wondering how that "see it all" approach worked for a busy repo, and now I know: it just doesn't.
I love SQLite, and I'm glad the primary author is happy working on it with Fossil. I think if he published its biggest drawback first, it might gain more adoption among the folks who could actually use that feature.
Personally I see "doesn't scale well" as a feature, if I'm evaluating something to use on a personal or small project. Tools with clear limits on what they're not for usually have a more polished experience for what they are for.
In that case you might really love "...SCons, a popular Python-based build tool. In a nutshell, my experiments showed that SCons exhibits roughly quadratic growth in build runtimes as the number of targets increases" [0]
I think [0] indicates that it does scale quite well; you can even test it by comparing the two. You can go from cloning to pushing on a repo in half the time. There are branches as well, so while the /timeline [1] seems too busy, you can always drop back to a forum or the bug tracker, both of which it already has.
    time fossil clone https://fossil-scm.org/
    time git clone https://github.com/drhsqlite/fossil-mirror

My results were:

fossil:

    time > real 5m43.120s  user 2m56.532s  sys 0m48.246s
    ls -lh > 60M fossil-scm.fossil

git:

    time > real 10m35.043s  user 5m58.427s  sys 1m10.555s
    du -sh fossil-mirror/.git/ > 829M
It's also significantly faster to clone after the repack:

    time git clone --mirror https://github.com/drhsqlite/fossil-mirror
    65.10s user 21.22s system 35% cpu 4:05.92 total

    time git clone --no-local fossil-mirror fossil-no-repack
    26.92s user 2.99s system 155% cpu 19.190 total

    git -C fossil-mirror.git repack -a -f -d

    time git clone --no-local fossil-mirror fossil-repack
    5.42s user 1.18s system 211% cpu 3.121 total
Edit: took new measurements with --mirror on the first clone so that the "local" clones actually get all the branches.
I imagine Fossil's internal Git export function is not robust. Even when you `fossil git export` the repo, you still get a much bigger .git than the fossil file (20 MB more), with errors on some checkouts. Repacking doesn't do much in this case either, so the exported repo must differ somehow.
Plenty of tech companies have massive monorepos which large numbers of people contribute to. I don’t know how easy it is for one person to see another’s branch though. So: maybe it can scale, at least somewhat, with a more centralised system.
I have a love-hate relationship with Git. It is currently used for a lot of purposes Torvalds never designed it for. It is time for something better; unfortunately, Fossil isn't likely to be Git's successor.
> Although DevEx is complex and nuanced, teams and organizations can take steps toward improvement by focusing on these three key areas.
I feel like they are aware that not everything fits into the framework perfectly, but if your organization is trying to improve DevEx, this framework is a place to start.
Uhhh... probably you and 0.2% of hiring managers. I've never cared when hiring, and I have never heard this advice anywhere but this post. No offense, but if it were as common as 10%, recruiters and bloggers would be suggesting it left and right.
People are saying this is "fake" because the overall motions were programmed ahead of time.
However, picking stuff up perfectly, placing it, and jumping, using only sensors - which is what this really illustrates - is not that fake to me.
I could imagine this robot being used by airlines to move baggage: a confined set of pre-programmed movements, with the only variable being the luggage on a cart, and a human supervisor. It reduces the back-breaking task of moving luggage to just a normal "stand there and press buttons" type of job. That makes the role easier to hire for and to retain, with fewer injuries to baggage handlers.
Giving it a narrow set of pre-programmed movements defeats the entire purpose of building a human shaped robot. The goal is for it to be general purpose. If they wanted to perform a single discrete task they could design a way more efficient machine for doing that task.
The complexity involved - across hardware, sensors, and software - is mind-boggling, even for this “dumb”, pre-programmed robot. So to me, it makes complete sense to build toward the ultimate goal gradually.
You need to build basic skill primitives for the robot until it's no longer a mechanical problem, just a software problem, which can be iterated on much more quickly and can take advantage of mounds of ML/AI advances. Seems like they are getting close.
How else would you iterate towards a more general purpose robot? You have to start with narrow tasks. Humans are general purpose but we don't teach them to drive cars before they can walk.
But you could upload a new program of movements on the same hardware.
Requiring bespoke hardware for every possible task is like saying we shouldn't have CPUs - if you're gonna write a program, might as well put it on an FPGA.
Boston Dynamics also published a making-of video that explains why the luggage scenario is still far away: the robot has perfect knowledge of the object it is transporting (size and weight distribution). To transport arbitrary objects, the control algorithms would need to estimate for themselves where to grip an object and what its weight distribution is.
It just needs to be fluent in moving in physical space.
Tesla made a "language model" by converting traffic intersections into a language and running GPT-3-style stuff on it.
It’s “programmed” as in you tell a kid to go there and grab that thing and come back.
Actually executing those motions involves a ridiculous amount of complexity; hell, just standing in one place without falling over is far from trivial for such a robot.
I think Atlas could see mainstream adoption without the high reliability that's expected from Autopilot. Atlas still needs to reliably not kill people, but this is seemingly much easier to do when you're not hurtling down the road at 70 miles per hour. Beyond that, it could be much less reliable than humans and still add tremendous value.
Human drivers crash once per day? Sorry, there is no comparison. The only reason you don't see FSD crashes is that FSD testers so far have been very attentive (they are better drivers than the general public, because they are enthusiasts who bought a luxury car).
But if FSD was unsupervised it would be a killing machine.
I am sure that if you counted, for example, Skoda Octavia crashes, it'd be way more than once a day.
Let's talk about "adaptive cruise control" too - I am not aware of any company other than Tesla having any sort of working crash protection built into that feature. My old Ford Mondeo just makes a loud noise and brakes a little, but you're going to crash anyway; it's not going to move the steering wheel an inch.
Volvo cars with "lane assist" will consistently steer you into the ditch, because the system greatly overcompensates when you get near the line, which can be deadly above 100 km/h. And it's pretty hard to turn that feature off while driving, and it turns back on automatically when you power on the car.
Why do people keep hating on the provably safest car (the Tesla Model S) instead of mentioning all the other cars with terrible, unsafe assistants? Roads would be a much safer place if everyone had a Tesla.
I've never crashed my car driving in downtown Seattle. But judging from the videos on YouTube, FSD cannot handle downtown Seattle or downtown Chicago; the crashes would be at least one per day. It's unacceptable for a system like this to crash the car even once per year when it's a $40,000+ purchase.
There are lots of ultra-specific news publications that require subscriptions. They're usually not very cheap either, but if a publication can gather 100 annual subscriptions at $2,000 apiece, that's $200,000 a year - enough to pay the salaries of at least one full-time journalist and one full-time editor, as well as any design overhead. It also allows them to maintain full independence.
The people who subscribe to these things are usually researchers, industry-related businesses, other journalists, or traders looking for an edge. The journalists writing the content are also researchers with a direct, exclusive interest in the subject, so it turns out fairly high quality. Here [0] is another one in a related industry, the Iraqi oil markets, with an annual subscription cost of $2,301.
In the grocery retail business, the trade press is free if you work for a retailer and very expensive if you're a wholesaler. The subscription also comes with a listing in a registry, which moves the expense to the marketing budget.