That's actually a problem for the business model of mobile games. A consumer can - or very soon will be able to - pick up AI tools and cut out the middleman org churning out these illustrations, just like they cut out the professionals. It won't be too long before games are made that advertise "put your original characters in the game", and it won't be some complicated character creation tool - it'll be generative stuff.
There's a lot of "but wait, there's more" in what's happening around AI.
Bugs related to concurrency - which is where you get race conditions and deadlocks - tend to pop up wherever the computation has an implied sequence of dependencies, and that sequence is determined dynamically by an algorithm.
For example, if I have a video game where there's collision against the walls, I can understand this as potentially colliding against "multiple things simultaneously", since I'm likely to describe the scene as a composite of bounding boxes, polygons, etc.
But to get an answer for what to do in response when I contact a wall, I have to come up with an algorithm that tests all the relevant shapes or volumes.
The concurrency bug that appears when doing this naively is that I test one shape, produce an answer for it, then modify that answer while testing the others. That can lose information and let me "pop through" a wall - and the direction in which I pop through depends on which shape happens to be tested first.
The conventional gamedev solution is to define down the solution set so that it no longer matters in which order I test the walls: with axis-aligned boxes, I can say "move only along the X axis first, then move only along the Y axis". Now there is a fixed order, and a built-in bias in favor of one axis or the other. But this is enough for the gameplay of your average platforming game.
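To make that concrete, here is a minimal sketch of the axis-separated approach in Python. Everything here (the box representation, the function names) is illustrative rather than taken from any particular engine:

```python
# Minimal sketch of "move one axis at a time" collision response for
# axis-aligned boxes, represented as (x, y, w, h) tuples.

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def move(player, dx, dy, walls):
    x, y, w, h = player

    # Step 1: move along X only, then push back out of any wall we now touch.
    # Because X is always resolved before Y, the order in which individual
    # walls are tested no longer changes the outcome (the built-in axis bias).
    x += dx
    for wx, wy, ww, wh in walls:
        if dx and overlaps((x, y, w, h), (wx, wy, ww, wh)):
            x = wx - w if dx > 0 else wx + ww

    # Step 2: same treatment along Y.
    y += dy
    for wx, wy, ww, wh in walls:
        if dy and overlaps((x, y, w, h), (wx, wy, ww, wh)):
            y = wy - h if dy > 0 else wy + wh

    return (x, y, w, h)
```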
The generalization of that is to describe it as a constraint optimization problem: there is some set of potential solutions, and they can be ranked against the "unimpeded movement" heuristic, which is usually what you want when clipping around walls. That solution set is then filtered down through the collision tests, and the top-ranked candidate becomes the answer for that timestep.
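In code, that framing can be as simple as generating a handful of candidate moves, filtering out the colliding ones, and taking the one closest to the unimpeded move. A toy sketch, with all names hypothetical:

```python
# Toy sketch of the constraint-optimization framing: enumerate candidate
# moves, discard the ones that collide, rank the survivors by how close they
# come to the unimpeded move, and take the best.

def resolve(position, desired_delta, candidate_deltas, collides):
    """position and deltas are (x, y) tuples; collides(pos) returns True/False."""
    def distance_from_desired(delta):
        # "Unimpeded movement" heuristic: prefer the candidate closest to
        # the move we originally asked for.
        return ((delta[0] - desired_delta[0]) ** 2
                + (delta[1] - desired_delta[1]) ** 2)

    feasible = [d for d in candidate_deltas
                if not collides((position[0] + d[0], position[1] + d[1]))]
    if not feasible:
        return position                     # fully blocked: stay put this timestep
    best = min(feasible, key=distance_from_desired)
    return (position[0] + best[0], position[1] + best[1])
```

With candidate_deltas as simple as [(dx, dy), (dx, 0), (0, dy), (0, 0)], this recovers roughly the axis-separated behavior above, but the same shape extends to arbitrary ranking heuristics and solution sets.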
Problems of this nature come up with resource allocation, scheduling, etc. Some kind of coordinating mechanism is needed, and OS kernels tend to shoulder a lot of the burden for this.
It's different from real-time in that real-time specifies what kind of performance constraint you are solving for, whereas concurrency allows any performance outcome, so long as the concurrent answers it returns are acceptable.
The last time I tested it, it wasn't performing differently from other DVCS options when pushing large files - that is, Perforce would still win hands-down. It does have a binary diff flag, but it's probably limited on a more fundamental level by being a SQLite app under the hood.
It's a combination of things. The "Pullman Loaf" (as in Pullman railcars) became common for sandwich breads in the 19th century and established a common target for what bread should look like in the US and UK - white, sweet, fluffy, thin crust.
Later, Wonder Bread was successfully marketed to us as a clean (as in sanitary) and convenient product, and that set another standard, one sold to housewives. It was basically the same strategy that built McDonald's in its early years. At the time of Wonder Bread's introduction, many adults could still recall when products were being sold "from the cracker barrel", mixed homogenously and exposed to pests. All the early packaged foods, including the canned stuff and the sliced bread, were establishing a new norm of the manufacturer guaranteeing freshness to a sell-by date. It was PB&J on sliced bread for lunch and macaroni and cheese for dinner that got a lot of the US through the Depression years.
Continental Europe was simply less interested in this genre of bread - they had competition from other varieties of grain, and different kinds of bread dishes.
> At the time of Wonder Bread's introduction, many adults could still recall when products were being sold "from the cracker barrel", mixed homogenously and exposed to pests.
I cannot remember this. (And wasn't around for the introduction of Wonder Bread.)
But I was disturbed to see recent commentary saying "why in the world does food come inside so much disposable packaging?"
That answer should be obvious to everyone. Modern food comes inside disposable packaging because the other way spreads disease. You could argue that we should have less of that, or more of it. But if you can't even imagine why the shrinkwrap is there... how do you get through the day?
Well, then just have the class communicate with the chatbot in study groups, taking turns to type, if "social" is what you're after. The way class time is currently used reflects the need to keep that grade level engaged with appropriate activities, and only as you get to the older ages does it converge on that kind of solo self-study. But it has to, eventually. You don't do research without figuring out how to do it. The flaws in leaning on LLMs are just another form of "precision vs accuracy" - GPT is always precise and not always accurate, just in the verbal domain instead of the numeric one. But we do have many tools like that in the numeric domain. The limitations can be learned.
If the LLM gives a solid three-star "fast food" education, that is actually considerably better than letting it all fall on the shoulders of the babysitters that currently serve in many classrooms.
I can vouch for Krita and Inkscape pairing well as vector art programs, even doing things that CSP, whose vector layers are pretty well liked, can't match. The issue is that drawing in Inkscape is a little bit broken (it can be done, but the current UX is death by papercut) - so I approach it through the other program, which is basic but consistent.
So the workflow I end up using for digital inks is: open both programs, sketch in Krita, copy-paste the vector data into Inkscape, apply stroke->path, then use the Tweak tool to sculpt the lines. This adds line weight in seconds to minutes. Alternatively, I can apply path effects instead of stroke->path if I want a more programmatic design. If I want to paint, I can copy-paste the shape back into Krita.
Inkscape has a lot to work on as a "content creation" app, while in terms of being true to SVG, it's done a lot of things the way you'd expect.
And I think that's most of the disconnect in the UI, which the devs have gradually gotten over by adding shadow XML data in the SVG to support Inkscape features while always presenting a rendered result.
Still, the drawing tools aren't consistent about the units they use yet. There is a lot of stuff there, and some of it is just papercuts.
I believe, having recently installed Python libraries on it, that Debian stable has the best handle on how to produce defaults that a mere mortal can debug: either use what's vendored in apt, or use venv and pip for your project's additional dependencies. Pip has been configured to be venv-only, which suits the needs of Debian's maintainers and clarifies what your project has to be responsible for. So, while I haven't needed it yet, I have some confidence that I can get to a reproducible dev environment with the appropriate version of whatever's needed, even if the resolution process is imperfectly automated.
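For what it's worth, that workflow can even be driven from Python's standard library rather than the shell. A rough sketch, roughly equivalent to `python3 -m venv .venv` followed by installing with the venv's own pip (the package name and paths are placeholders):

```python
# Rough sketch: create a project venv and install extra dependencies into it,
# leaving the apt-managed system packages untouched.
import subprocess
import venv
from pathlib import Path

env_dir = Path(".venv")

# Equivalent to `python3 -m venv .venv`; with_pip=True bootstraps pip
# inside the new environment.
venv.EnvBuilder(with_pip=True).create(env_dir)

# Use the venv's own pip so nothing lands in the system site-packages.
subprocess.run([str(env_dir / "bin" / "pip"), "install", "requests"], check=True)
```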
You could definitely argue that it would be better to build off of something like Contiki, a modern small-device OS that gained a lot of fame for its ports to old microcomputers, but is currently active in industry as well, with a new "Contiki-NG" version.
The appeal of Forth is mostly in a principled sense of, if you want Forth to be your interface, you have to build your own Forth, because Forth does as little as possible to structure you. And building your own Forth is not impractical; that's the whole point. It's wonderful as an exercise and can be productive in the right context.
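To make "build your own Forth" concrete, here is a deliberately tiny Forth-flavored interpreter in Python - nowhere near a real Forth (no return stack, no compile state, no memory model), just the shape of the idea: a data stack plus a dictionary of words.

```python
# A deliberately tiny Forth-flavored interpreter: a data stack plus a
# dictionary of words. Real Forths add a return stack, compilation state,
# defining words, and much more; this only conveys the shape of the idea.

def make_forth():
    stack = []
    words = {}

    def run(source):
        tokens = iter(source.split())
        for token in tokens:
            if token == ":":                   # : name body... ;  defines a new word
                name = next(tokens)
                body = []
                for t in tokens:
                    if t == ";":
                        break
                    body.append(t)
                words[name] = " ".join(body)
            elif token in words:
                definition = words[token]
                if callable(definition):
                    definition()               # built-in word
                else:
                    run(definition)            # user-defined word: run its body
            else:
                stack.append(int(token))       # anything else is a number literal

    words.update({
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
        ".":    lambda: print(stack.pop()),
    })
    return run

forth = make_forth()
forth(": square dup * ;  7 square .")          # prints 49
```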
But any highly developed or standardized Forth system gradually converges on being "just another platform." And once it's just another platform, it's an annoying dependency, and you end up demanding more structure to help tame the complexity, so the Forthiness steadily erodes.
I couldn't imagine using Forth for all of my programming, but personally I quite like the thought of underpinning a more complex system or language with a small language like a Forth or a Scheme, or just something borrowing their key elements. Use them to shape the basic structures at a level where they excel, then put one or more other options on top, interfacing with a layer that's far more pleasant than a bare machine.
E.g. for my experimental, long-languishing Ruby compiler, I bootstrapped a tiny typeless s-expression-based language - not a Lisp/Scheme, it just borrowed some syntax - to let me "escape" the warm embrace of Ruby and bootstrap the basics. It wasn't very carefully planned, and there are many things I'd like to change when I get time to play with that project again (it's been literally years, but it's not forgotten), but that aspect is a part I still like, though I might want to change pieces of that low-level language too.
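(Not the actual bootstrap language from that project, but for flavor: a bare s-expression reader really is only a few lines, which is a large part of why that kind of syntax is attractive as a bootstrap layer.)

```python
# A minimal s-expression reader - illustrative only, not the language from
# the compiler project above. The whole "parser" is a tokenizer plus one
# recursive function.

def tokenize(source):
    return source.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)                 # drop the closing ")"
        return expr
    return token                      # atoms stay as plain strings

print(parse(tokenize("(defun add (a b) (+ a b))")))
# ['defun', 'add', ['a', 'b'], ['+', 'a', 'b']]
```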
The article is making the case for unbundling the authoring experience by bundling the browser and making it an owned environment. And that could be a win.