I find it a little funny that their blog post goes into a lot of righteous detail on why they don't charge for address space like other providers, and then on their pricing page they charge you to receive or send emails.
Receive! The moment someone knows you are using them as a provider, they could DoS your account just by flooding you with inbound mail and running up your bill. No other provider charges this way.
I'm so mad about Slack removing custom themes as part of this update. It seems entirely pointless given the volume of themes they now provide, and it feels arbitrary that they have limited your own choices.
This is highly reminiscent of the numbers game from the British daytime gameshow Countdown.
Contestants pick six numbers, choosing how many come from the "big" set (25, 50, 75, 100) and how many from the "small" set (1-10), and are then given a random three-digit target. Whoever gets closest within 30 seconds using the four standard arithmetic operations wins.
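The numbers game is also a fun brute-force exercise; here's a minimal sketch in Python (a naive exhaustive search, but six numbers is a tiny space; the rule that intermediate results must stay positive integers is the show's):

    def countdown(numbers, target):
        """Brute-force the Countdown numbers game: combine pairs with
        +, -, *, / and track the closest value seen. Intermediate
        results must stay positive integers, per the show's rules."""
        best = {"diff": float("inf"), "expr": None}

        def search(items):  # items: list of (value, expression) pairs
            for value, expr in items:
                if abs(value - target) < best["diff"]:
                    best["diff"] = abs(value - target)
                    best["expr"] = f"{expr} = {value}"
            if best["diff"] == 0 or len(items) < 2:
                return  # exact hit, or nothing left to combine
            for i in range(len(items)):
                for j in range(i + 1, len(items)):
                    (a, ea), (b, eb) = items[i], items[j]
                    if a < b:  # order the pair so subtraction stays positive
                        a, b, ea, eb = b, a, eb, ea
                    rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                    combos = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
                    if a > b:
                        combos.append((a - b, f"({ea}-{eb})"))
                    if a % b == 0:
                        combos.append((a // b, f"({ea}/{eb})"))
                    for combo in combos:
                        search(rest + [combo])

        search([(n, str(n)) for n in numbers])
        return best["expr"]

    print(countdown([75, 50, 2, 3, 8, 7], 812))  # finds an exact 812, e.g. (7*(2*(50+8)))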
I dropped pinboard.in recently. The interface hasn't had improvements in years, the extensions are all third party, and the API, if you wanted to build your own, is pretty limiting. The mobile interface is pretty poor too.
I've now moved over to Raindrop.io[1], which is another solo-developer outfit, but one that has had a lot of work put into it. It does all the same stuff Pinboard does (including page archiving, though without the social and public directory things... which nobody uses anyway), plus a bunch of additional features. It has a much more complete API, a well-maintained extension, and mobile apps! Definitely worth giving a go.
I've been paying for Raindrop for a few years and its organisation capabilities are really good: tags, collections, folders, search, etc. All in a quite polished UI!
Agreed, it's a much better service; I've been really enjoying it.
One feature I recently learned about was Highlights [1].
You can select a passage of text on the page, and when you bookmark it, the selected text is saved. It allows for multiple highlights, and when you visit the page again in the future, those text clips are highlighted once more.
I used to work for a major multinational media company.
A few years ago the CTO at that media company looked to slash costs. They have a bunch of newspapers all over the world, and they had just acquired an Indian software development agency with experience in WordPress.
You can see where this is going.
Soon there was a programme to roll out WordPress globally to every publisher as their CMS.
But of course you would be allowed some flexibility. You could bring your own CDN if needed, you could bring your own frontend to read off the API, you could harness some identity management system for login, you could even build your own content store.
At that point "using WordPress" just became modifying Gutenberg. I feel sorry for the staff still there who have to constantly fight against Automattic's clear direction in building a Squarespace competitor, to build a tightly locked down and limited editor against the frustrations of the rest of the WordPress ecosystem.
WordPress is great, if you realise what WordPress wants to be, and don't fight that.
Yeah, sorry, that is what the world is like. It's a shame, but there's a way (as written in the post) that Apple could have accomplished something similar whilst still providing enough data to prevent the advertising arms race. They chose not to, so the industrial advertising complex will start manufacturing its arms.
And I'm sorry, but you are entirely wrong on your second point. Small publishers and newsletters are a growing business in media, many of them free (in order to build an audience from nothing). Blocking their opens from being recorded for what could be over 50% of recipients is going to cause them problems. They won't be able to measure their growth as effectively, they won't be able to get sponsorships as easily, and they won't be able to judge as easily when to go paid and ad-free.
Marketing newsletters are very different from the editorial newsletters that I'm generally talking about here. It's a shame they have to use the same technology, but the unsubscribe pruning is reader friendly, and doesn't prevent a smooth and reliable manual unsubscribe process.
This was part of the exciting suggestion of the original Stadia trailers: with the entire game and all the interactions happening in Google datacentres, you would only need to receive a rendered version, and suddenly all this would be possible.
Of course none of that happened and Google folded their games studio, which is par for the course for them, but it does go some way to reinforcing the point that there's a reason this stuff isn't done.
Isn't that how multiplayer games work today? You receive a rendering of the server's source of truth, and send actions you'd like to perform. The current implementation just sends a lot less data to the client, sending deltas and positional data rather than full-screen renderings, right? It seems like that wouldn't fix any of the issues above, and might just introduce the complexity of streaming video to all the clients and replicating world state to all the video-streaming servers.
With Stadia et al. your computer sends keystrokes and mouse movements to Google's servers, and the server returns a video feed that's then displayed on your screen. Today's multiplayer games send and receive movement information and such, but the graphics rendering, which I believe is the main issue here, is still done client-side.
I don't think graphics rendering is really the problem (worst comes to worst, you just reduce graphical fidelity), and regardless, rendering 300 AI characters roaming around is clearly doable, and it'd be the same with viewing real players. It's a client-side problem, and doesn't have to scale much with player count. (Stadia's main pitch was that you didn't need high-end PCs to play AAA games, not that it would enhance games' ability to play online.)
The hard part is dealing with 300 players communicating simultaneously with random delays and state changes, in an environment that doesn't really handle delays very well (a text chat can be 10s late and no one cares; a character's movement can only be a few frames late before interpolation becomes obvious and you start teleporting people around). And tracking the state changes across users and passing them around tends to add up, causing further delays.
Of course, you could always change the game model itself. RuneScape and MapleStory trivially enabled large groups of players simultaneously (pretty sure you could easily find places with 200+ players visible and active). RuneScape was basically turn-based and ran like 1 turn a second, so there was much more room for delays. MapleStory capped actual active play to relatively few players, but enabled large crowds in towns acting as glorified visual chat rooms, which solved 90% of the feeling of being in a large community. I think nearly every MMO implements the MapleStory strategy.
> RuneScape was basically turn-based and ran like 1 turn a second
Actually, RuneScape still has 0.6-second ticks. It has become part of the meta-game to input commands at exactly 0.6-second intervals for optimal efficiency, and doing so is basically mandatory for high-level PvP combat.
> (the run trick) works by first standing on a square before a trap, and then running across the single square that has a trap, in order to end up on the other side. Because running causes the player to move over two squares in a single tick, this causes the game to never consider the player to be on the trapped square.
This is like Programming 101 for player position updates: check every square the player passes through, so they can't run or jump through walls. A classic novice mistake. I'm surprised to see it in such a mature game.
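The textbook fix is to check every tile the move passes through, not just the destination; a rough Python sketch (all names invented):

    from dataclasses import dataclass

    @dataclass
    class Player:
        pos: tuple

    def tiles_crossed(start, end):
        """Every tile stepped on when moving from start to end, assuming
        a straight line of adjacent tiles (run = two tiles per tick)."""
        (x0, y0), (x1, y1) = start, end
        steps = max(abs(x1 - x0), abs(y1 - y0))
        for i in range(1, steps + 1):
            yield (x0 + (x1 - x0) * i // steps, y0 + (y1 - y0) * i // steps)

    def apply_move(player, dest, traps):
        # The buggy version only checks `dest`, so running two tiles per
        # tick hops straight over a trap on the intermediate tile.
        for tile in tiles_crossed(player.pos, dest):
            if tile in traps:
                print(f"trap triggered at {tile}")
        player.pos = dest

    p = Player(pos=(0, 0))
    apply_move(p, (2, 0), traps={(1, 0)})  # running across a trapped square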
That's not necessarily the hard part. I mean, it's hard, but it's not the hard part.
We know this because what this game is trying to do was already done a long time ago. It's a description of the original vision for Second Life. In fact, Second Life was explicitly an attempt to create "the metaverse", with all user-generated content. Linden Lab used to write about the challenges they faced in making that vision real, so we have a good idea of where the challenges really lie, or at least where they lay 10-15 years ago.
First problem: physics. LL sharded their world, which was indeed infinite. The problem is each shard required its own dedicated high-end server, which made "land" extremely expensive. Amazingly, people bought it anyway; LL developed a small but extremely rich customer base who were willing to literally rent what would today be high-end cloud VMs just to have a very small space to call their own online. For a brief period it was also trendy for companies to open up spaces in Second Life.
The reason land was so expensive was that physics (collision detection and movement, mostly) required the server to constantly iterate over every object within the zone, and the calculations scaled with the number of objects. Shutting down physics when nobody was there also wasn't possible, because the metaverse concept implied allowing arbitrary scripting, and scripts frequently expected the world to keep running even when nobody was around. The answer in SL to "if a tree falls and nobody is around to hear it, does it make a sound?" was a resounding yes.
The second big problem they faced was that user generated 3D content was extremely un-optimized. The biggest complaint users had about SL was always performance. Eventually they gave up promising to improve it and came clean with the userbase: SL was slow and always would be because users kept making slow content. In particular SL was set mostly outdoors, and even when indoors, people loved doing things like creating semi-transparent windows. The ability for any object to change at any moment (users created content in-game) also implied they couldn't use all the "baking" techniques professional games use to optimize rendering. So the rendering algorithms had to be very primitive, and there was lots of overdraw, so SL really chugged even when drawing scenes that looked very basic compared to what high end games could do. And of course hobbyist metaverse users are not pro grade 3D artists so they tended to make a lot of ugly stuff anyway.
So syncing player data was certainly a sub-challenge, but compared to the difficulties posed by cost effective sharded physics and totally un-optimizable 3D scenes, it wasn't that big a deal.
There are tons of examples of physics engines running on the GPU. Contact resolution (particularly between simple shapes) is an embarrassingly parallel problem. I don't think it would be impossible to handle millions of collisions per frame on a modern GPU, maybe even a CPU. Also, what counted as a high-end server a decade ago is probably much more affordable with today's technology. You can also probably design a scripting system or set of game rules that allows unobserved NPCs/objects to sleep when no player is around them.
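Something like this, say (a hedged sketch; the radius, the names, and the fast-forward-on-wake policy are all invented):

    import math
    from dataclasses import dataclass

    @dataclass
    class WorldObject:
        pos: tuple
        slept: float = 0.0  # seconds of simulation skipped while asleep

        def update(self, dt):
            pass  # physics + script step would go here

    WAKE_RADIUS = 64.0  # arbitrary

    def tick(objects, player_positions, dt):
        for obj in objects:
            near = any(math.dist(obj.pos, p) < WAKE_RADIUS for p in player_positions)
            if near:
                if obj.slept:
                    obj.update(obj.slept)  # fast-forward (or coarsely
                    obj.slept = 0.0        # re-simulate) the missed time
                obj.update(dt)
            else:
                obj.slept += dt  # asleep: just accumulate skipped time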
User content can probably be limited as well, so that people don't upload overly complex or poorly performing 3D assets.
I'm not saying that this isn't a tall order, but I think it's far more manageable by a sufficiently determined and talented team than it was decades ago.
Sure, technology makes some things easier with time. But the need for content to be optimized has not really gone away, and the issue was not necessarily overly complex 3D assets, but that the very structure of what people wanted to build was difficult to optimize for. Many AAA games are set in indoor areas with no windows, because this limits how much you have to draw. In Second Life, the camera could see long distances almost everywhere, either because you were outside or because you were inside a room with windows. And that doesn't seem to have a really easy technological solution, although maybe Unreal's new Nanite can do something about it by avoiding the need for custom LOD meshes.
The delays, interpolation, etc. are a big problem in games like FPS, but I fail to see why they'd be a large problem in an MMO.
Why can't you just send the new position of other visible players, and then client-side play those characters' walking animations to the new position from their current (client-side) position? Unless the accuracy of another player's targeting is relevant why is there a problem with their position always being delayed by, say, a second?
A good MMO typically isn't just a large number of players walking around and not interacting with one another.
Look at Eve Online; you have a spaceship, you travel around and shoot stuff (this is simplifying it, but that's a big part of it). Sometimes there are 100 or 1000 ships all involved in one battle; what do you have to deal with in that case? You need to send all ship data (model, loadout, customizations) to every client so they can load in the right assets. You need to track where each ship is (in 3d) and where it's going at any given time. You need to track what weapons are firing, and who they might hit (there are missiles, interception missiles, "bombs" with an explosion radius, lasers that hitscan, guns that have tracking projectiles, etc). You need to compute status effects that may change based on distance between given ships. You need to send all this data to each client regularly (every second at least).
This really adds up, both in compute and memory on the server and in the amount of data that needs to be sent to each client. It gets difficult quickly. Let's say all the info above can be described in 1 kB/ship/tick. You get up to the larger battles, which in Eve have hit 5000+ ships (rarely, yes), and you're dealing with 5 MB/tick going to every one of those clients.
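Worked through (assuming every one of those 5000 clients is in the battle):

    ships = 5_000
    state_per_ship = 1_000                # bytes per ship per tick (1 kB)
    per_client = ships * state_per_ship   # 5 MB to each client, per tick
    server_total = per_client * ships     # x 5,000 clients: 25 GB per tick
    print(per_client / 1e6, "MB/client/tick;", server_total / 1e9, "GB/tick total")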
> A good MMO typically isn't just a large number of players walking around and not interacting with one another.
It is, though! In most games you're only actually interacting with one or a few people at a time - even if many more might be visible.
Part of it, of course, is that the existing MMOs are designed for small-group gameplay. But even outside MMOs - Minecraft, CoD, etc. - you're not ever going to interact with hundreds of players. A human can't manage that!
Eve runs into problems because it has large-scale PvP. That says almost nothing about PvE scenarios.
Eve Online has a lot of self-inflicted problems, like relying on Python for speed-sensitive code, running single-threaded servers, allocating one CPU per solar system (a small region around a star) instead of per grid, and expanding grids to ridiculous sizes instead of partitioning them into small microgrids with delayed group updates.
You really don't need to be sent accurate per-server-tick information about a rocket hitting a ship 5,000 km away from you.
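That is, some form of distance-based interest management; a toy sketch (the thresholds and rates are made up):

    import math

    def update_rate(distance_km):
        """Ticks between updates for an entity at this distance."""
        if distance_km < 50:
            return 1    # every tick: it might be shooting at you
        if distance_km < 500:
            return 10   # visible, but not urgent
        return 100      # a dot on the overview

    def entities_to_send(viewer_pos, entities, tick):
        """entities: iterable of (entity_id, pos) tuples."""
        for eid, pos in entities:
            if tick % update_rate(math.dist(viewer_pos, pos)) == 0:
                yield eid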
As I said, the game model changes things. One of the main issues in these discussions is that no one posts what they're actually imagining the MMO to be doing, so we end up talking past each other.
Anyway, with an FPS or action RPG (anything where you can act and react quickly and expect your peers to do the same), low-latency state updates are absolutely vital.
With WoW-style games you could probably get away with a LOT of interpolation before it becomes a problem (it'll quickly become obvious as people get unnaturally tweened across the screen in straight lines, but that's OK), largely because there's really not much for you to be doing off-cooldown anyway.
The main problem is that, like RuneScape/MapleStory, supporting 200 players in the same spot in WoW doesn't really do anything for you (except for those group fights, where latency becomes important again). Towns are just glorified chat rooms, because what else are you going to do? You're not Pikmin requiring x/200 to proceed.
VRChat literally accepts that chat-room aspect, and so it's more than acceptable; it's part of the fun.
That is, you can change the gameplay to better handle the issues relating to large groups. The bigger problem really is: do you even want 20,000 people in the same session? There's not much to do in a crowd; in a real-world crowd you basically lose all autonomy, and the crowd itself becomes the unit of autonomy with "a mind of its own". Ultimately it's a dumb goal to have in a non-competitive setting.
What you really want is 20,000 people able to affect and manipulate the same "world", the way we do in reality. We don't need everyone to be there in front of us; we realize things have been changed by other people while we weren't looking. You want object persistence and manipulation with a consistent world state, shared by thousands. Which is a much different problem: what you really want is a proper simulation.
We try to send something as close to just keystrokes as possible in games too, to prevent cheating among other reasons. Sometimes we have to send actual state, though.
Yes. Typically you are seeing an interpolation between the snapshot one interval behind the last agreed source of truth and the last agreed source of truth itself. The idea is that all inputs for the next interval will have arrived by the time you reach the newest snapshot (at which point it becomes the older snapshot of the pair).
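A minimal sketch of that buffering scheme (the 100 ms interval and the names are made up; real implementations also deal with extrapolation and clock skew):

    SNAPSHOT_INTERVAL = 0.1  # server sends authoritative state every 100 ms

    def lerp(a, b, t):
        return a + (b - a) * t

    def render_position(snapshots, now):
        """snapshots: list of (timestamp, position), oldest first. The
        client renders one interval in the past, blending between the
        two snapshots that bracket that moment."""
        render_time = now - SNAPSHOT_INTERVAL
        for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
            if t0 <= render_time <= t1:
                return lerp(p0, p1, (render_time - t0) / (t1 - t0))
        return snapshots[-1][1]  # outside the buffer: snap to newest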
In modern multiplayer games the "server" is just a thin layer passing packets around with zero validation; then you have a rootkit installed on clients trying to sniff for known cheat signatures, again doing no packet validation.
No true server source of truth, no common-sense packet parsing; just sandcastles, and cheats allowing you to fly around and spawn items.
Stadia isn't magic, though; it just runs on whatever random GPU they have lying around. It was never going to be better than what you can build yourself.