Not a system programmer -- at this point, does C hold any significant advantage over Rust? Is it inevitable that everything written in C is going to be gradually converted to safer languages?
C currently remains the language of system ABIs, and there remains functionality that C can express that Rust cannot (principally bitfields).
Furthermore, in terms of extensions to the language to support more unusual architectures, Rust has made a couple of decisions that make it hard for some of those architectures to be supported well. For example, Rust has decided that the array index type, the object size type, and the pointer size type are all the same type, which is not the case for a couple of architectures; it's also the case that things like segmented pointers don't really work in Rust (of course, they barely work in C, but barely is more than nothing).
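A minimal sketch of the conflation being described, assuming nothing beyond stock Rust: indexing, object sizes, and pointer-to-integer casts all flow through the single `usize` type, so an architecture that wants these to differ has nowhere to put the distinction.

```rust
fn main() {
    let xs = [10u8, 20, 30];
    let i: usize = 1;                            // array index type
    let n: usize = core::mem::size_of_val(&xs);  // object size type
    let a: usize = xs.as_ptr() as usize;         // pointer-sized integer
    println!("xs[{i}] = {}, size = {n}, addr = {a:#x}", xs[i]);
}
```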
Can you expand on bitfields? There are crates that implement bitfield structs via macros, so while it's not baked into the language, I'm not sure what in practice Rust isn't able to do on that front.
I'm not a Rust or systems programmer, but I think they meant that, as an ABI or foreign function interface matter, bitfields are not stable or not intuitive to use, as they can't be declared granularly enough.
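To make the gap concrete, here's a hedged sketch of what you do in Rust today in place of C's `struct { unsigned a : 3; unsigned b : 5; }`: manual shifts and masks (the macro crates mentioned above generate essentially this). The point is that the layout is whatever you write by hand, not something the compiler derives to match a platform ABI's bitfield packing rules. `Flags` and its accessors are illustrative names.

```rust
// Emulating a C bitfield `struct { unsigned a : 3; unsigned b : 5; }`
// by hand in a single byte.
#[derive(Clone, Copy)]
struct Flags(u8);

impl Flags {
    fn a(self) -> u8 { self.0 & 0b0000_0111 }        // low 3 bits
    fn set_a(&mut self, v: u8) {
        self.0 = (self.0 & !0b0000_0111) | (v & 0b0000_0111);
    }
    fn b(self) -> u8 { self.0 >> 3 }                 // high 5 bits
    fn set_b(&mut self, v: u8) {
        self.0 = (self.0 & 0b0000_0111) | ((v & 0b0001_1111) << 3);
    }
}
```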
That first sentence though. Bitfields and ABI alongside each other.
Bitfield packing rules get pretty wild. Sure the user facing API in the language is convenient, but the ABI it produces is terrible (particularly in evolution).
I would like a revision to bitfields and structs to make them behave the way a programmer thinks, with the compiler free to suggest changes which optimize the layout, as well as some flag that indicates the compiler should not, because it's a finalized structure.
I'm genuinely surprised that usize <=> pointer convertibility exists. Even Go has different types for pointer-width integers (uintptr) and sizes of things (int/uint). I can only guess that Rust's choice was seen as a harmless simplification at the time. Is it something that can be fixed with editions? My guess is no, or at least not easily.
There is a cost to having multiple language-level types that represent the exact same set of values, as C has (and is really noticeable in C++). Rust made an early, fairly explicit decision that a) usize is a distinct fundamental type from the other types, and not merely a target-specific typedef, and b) not to introduce more types for things like uindex or uaddr or uptr, which are the same as usize on nearly every platform.
Rust's initial guarantee was worded such that usize was sufficient to roundtrip a pointer (making it effectively uptr), and there remains concern among several of the maintainers about breaking that guarantee, despite people on the only target that would be affected basically saying they'd rather see that guarantee broken. The more fundamental problem is that many crates are perfectly happy opting out of compiling for weirder platforms--I've designed some stuff that relies on 64-bit system properties, and I'd rather like to have the ability to say "no compile for you on platforms where usize-is-not-u64" and get impl From<usize> for u64 and impl From<u64> for usize. If you've got something like that, it also provides a neat way to say "I don't want to opt out of [or into] compiling for usize≠uptr" while keeping backwards compatibility.
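A hypothetical sketch of that "no compile for you" idea, since std today only offers TryFrom between u64 and usize: the compile_error! guard is real Rust, while the convenient lossless helpers are the part being wished for.

```rust
// Refuse to build on targets where usize is not 64 bits...
#[cfg(not(target_pointer_width = "64"))]
compile_error!("this crate assumes a 64-bit usize");

// ...after which these casts are known-lossless on every target
// that actually compiles.
fn to_u64(n: usize) -> u64 {
    n as u64
}

fn to_usize(n: u64) -> usize {
    n as usize
}
```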
> Is it something that can be fixed with editions? My guess is no, or at least not easily.
Assuming I'm reading these blog posts [0, 1] correctly, it seems that the size_of::<usize>() == size_of::<*mut u8>() assumption is changeable across editions.
Or at the very least, if that change (or a similarly workable one) isn't possible, both blog posts do a pretty good job of pointedly not saying so.
In what architecture are those types different? Is there a good reason for it there architecturally, or is it just a toolchain idiosyncrasy in terms of how it's exposed (like LP64 vs. LLP64 etc.)?
CHERI has a 64-bit object size but 128-bit pointers (because the pointer values also carry pointer provenance metadata in addition to an address). I know some of the pointer types on GPUs (e.g., texture pointers) also have wildly different sizes for the address size versus the pointer size. Far pointers on segmented 16-bit x86 would have a 16-bit object and index size but a 32-bit address and pointer size.
There was one accelerator architecture we were working on that discussed making the entire datapath 32-bit (taking less space) and having a 32-bit index type with a 64-bit pointer size, but this was eventually rejected as too hard to get working.
There are certain styles of programming and data structure implementations that end up requiring you to fight Rust at almost every step. Things like intrusive data structures, pointer manipulation and so on. Famously there is an entire book online on how to write a performant linked list in idiomatic Rust - something that is considered straightforward in C.
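For the curious, a minimal sketch of why a doubly linked list specifically fights the borrow checker: each node would need two owners, which unique ownership can't express, so safe Rust ends up with Rc<RefCell<..>> plus Weak back-edges (or drops to raw pointers and unsafe, as std's own LinkedList does internally).

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// One plausible safe-Rust shape for a doubly linked list node.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // shared ownership of the successor
    prev: Option<Weak<RefCell<Node>>>, // Weak back-edge avoids a leak cycle
}
```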
For these cases you could always use Zig instead of C
Every system under the Sun has a C compiler. This isn't remotely true for Rust. Rust is more modern than C, but has its own issues, among others very slow compilation times. My guess is that C will be around long after people have moved on from Rust to another newfangled alternative.
There is a set of languages which are essentially required to be available on any viable system. At present, these are probably C, C++, Perl, Python, Java, and Bash (with a degree of asterisks on the last two). Rust I don't think has made it through that door yet, but on current trends, it's at the threshold and will almost certainly step through. Leaving this set of mandatory languages is difficult (I think Fortran, and BASIC-with-an-asterisk, are the only languages to really have done so), and Perl is the only one I would risk money on departing in my lifetime.
I do firmly expect that we're less than a decade out from seeing some reference algorithm be implemented in Rust rather than C, probably a cryptographic algorithm or a media codec. Although you might argue that the egg library for e-graphs already qualifies.
We're already at the point where in order to have a "decent" desktop software experience you _need_ Rust too. For instance, Rust doesn't support some niche architectures because LLVM doesn't support them (those architectures are now exceedingly rare) and this means no Firefox for instance.
IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.
Rust is different because it both:
- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities,
- and reduces the cognitive burden for contributors by making it possible to encode in the type system the invariants that must be upheld.
That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)
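A small sketch of what "encoding invariants in the type system" can look like, assuming nothing beyond std: the data lives inside the Mutex, so there is simply no way to touch it without holding the lock, where the C equivalent is usually a comment next to the field.

```rust
use std::sync::Mutex;

struct Counter {
    // The counter is only reachable through the lock.
    inner: Mutex<u64>,
}

impl Counter {
    fn increment(&self) {
        let mut guard = self.inner.lock().unwrap();
        *guard += 1;
    } // guard dropped here: the lock is released automatically
}
```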
Zig as a language may not be worth it, but as a build system it's amazing. I wouldn't be surprised if Zig gets in just because of the much better build system than C ever had (you can cross compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.
Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold claim. Is there ZERO use of "unsafe" Rust in kernel code??
Aside from the minimal use of unsafe being heavily audited and being the only entry point for those vulnerabilities, it allows for expressing kernel rules explicitly and structurally, whereas at best there was a code comment somewhere on how to use the API correctly. This was true because there was discussion precisely about how to implement Rust wrappers for certain APIs, since it was ambiguous how those APIs were intended to work.
So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).
In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)
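As a tiny illustration of the thread-safety point, here is a sketch of code the compiler rejects outright, where the racy C equivalent compiles silently:

```rust
use std::thread;

fn main() {
    let mut count = 0;
    // Rejected with error[E0373]: the closure may outlive the
    // function, but it borrows `count`. There is no way in safe
    // Rust to mutate `count` from another thread without a
    // synchronization type like Mutex or an atomic.
    let t = thread::spawn(|| {
        count += 1;
    });
    t.join().unwrap();
}
```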
Not zero, but Rust-based kernels (see Redox, Hubris, Asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%), and it's also the kind of code least likely to produce a memory-related error in a C-based kernel in the first place (you're more likely to make a memory-related error when working on the implementation of an otherwise challenging algorithm that has nothing to do with memory management itself than you are when you are explicitly focused on the memory-management part).
So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as anecdotal evidence, the Android team found memory defects to be between 3 and 4 orders of magnitude less frequent in practice over the past few years).
I can’t think of many real world production systems which don’t have a rust target. Also I’m hopeful the GCC backend for rustc makes some progress and can become an option for the more esoteric ones
There aren't really any "systems programming" platforms anywhere near production that don't have a workable Rust target.
It's "embedded programming" where you often start to run into weird platforms (or sub-platforms) that only have a c compiler, or the rust compiler that does exist is somewhat borderline. We are sometimes talking about devices which don't even have a gcc port (or the port is based on a very old version of gcc).
Which is a shame, because IMO, rust actually excels as an embedded programming language.
Linux is a bit marginal, as it crosses the boundary and is often used as a kernel for embedded devices (especially ones that need to do networking). The 68k people have been hit quite hard by this, linux on 68k is still a semi-common usecase, and while there is a prototype rust back end, it's still not production ready.
There's also, I believe, an effort to target C as a mid-level compilation target, although I don't know its state, or how well it'll work in an embedded space anyway, where performance really matters and these compilers have super old optimizers that haven't been updated in 3 decades.
It's mostly embedded / microcontroller stuff. Things that you would use something like SDCC or a vendor toolchain for. Things like the 8051, STM8, PIC, or oddball things like the 4 cent Padauk micros everyone was raving about a few years ago. The 8051 especially still seems to come up from time to time in things like the CH554 USB controller, or some nRF 2.4GHz wireless chips.
Those don't really support C by any real stretch. Speaking from general experience with microcontrollers and closed vendor toolchains: it's a frozen dialect of C from decades ago, which isn't what people think of when they say C (usually people mean at least the 26-year-old C99 standard, but these often at best support C89, or even come with their own limitations).
That isn't always the case. Slow compilations are usually because of procedural macros and/or heavy use of generics. And even then compile times are often comparable to languages like typescript and scala.
> Every system under the Sun has a C compiler... My guess is that C will be around long after people will have moved on from Rust to another newfangled alternative.
This is still the worst possible argument for C. If C persists in places no one uses, then who cares?
You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM. The embedded space is still incredibly fragmented.
That said, only a handful of those architectures are actually so weird that they would be hard to write a LLVM backend for. I understand why the project hasn’t established a stable backend plugin API, but it would help support these ancillary architectures that nobody wants to have to actively maintain as part of the LLVM project. Right now, you usually need to use a fork of the whole LLVM project when using experimental backends.
> You almost certainly have a bunch of devices containing a microcontroller that runs an architecture not targeted by LLVM.
This is exactly what I'm saying. Do you think HW drives SW or the other way around? When Rust is in the Linux kernel, my guess is it will be very hard to find new HW worth using, which doesn't have some Rust support.
rustc can use a few different backends. By my understanding, the LLVM backend is fully supported, the Cranelift backend is either fully supported or nearly so, and there's a GCC backend in the works. In addition, there's a separate project to create an independent Rust frontend as part of GCC.
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers/compiler forks, or systems with unusual data types which violate Rust's assumptions (Rust assumes 8-bit bytes, IIRC).
Before we ask if almost all things old will be rewritten in Rust, we should ask if almost all things new are being written in Rust or other memory-safe languages?
Obviously not. When will that happen? 15 years? Maybe it's generational: how long before developers 'born' into memory-safe languages as serious choices will be substantially in charge of software development?
I don't know; I tend to come across new tools written in Rust, JavaScript, or Python, but relatively little C. The times I see some "cargo install xyz" in the git repo of some new tool are definitely noticeable.
New high-scale data infrastructure projects I am aware of mostly seem to be C++ (often C++20). A bit of Rust, which I’ve used, and Zig but most of the hardcore stuff is still done in C++ and will be for the foreseeable future.
It is easy to forget that the state-of-the-art implementations of a lot of systems software are not open source. They don't struggle to attract contributors because of language choices; being on the bleeding edge of computer science is selling point enough.
> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free
which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".
I'm a bit wary that this is hiding an ageist sentiment, though. I doubt most Rust developers were 'born into' the language; most instead adopted it on top of existing experience in other languages.
People can learn Rust at any age. The reality is that experienced people often are more hesitant to learn new things.
I can think of possible reasons: Early in life, in school and early career, much of what you work on is inevitably new to you, and also authorities (professor, boss) compel you to learn whatever they choose. You become accustomed to and skilled at adapting new things. Later, when you have power to make the choice, you are less likely to make yourself change (and more likely to make the junior people change, when there's a trade-off). Power corrupts, even on that small scale.
There's also a good argument for being stubborn and jaded: you have 30 years of perfecting the skills, tools, efficiencies, etc. of C++. For the new project, even if C++ isn't as good a fit as Rust, are you going to be more efficient using Rust? How about in a year? Two years? ... It might not be worth learning Rust at all; ROI might be higher continuing to invest in additional elite C++ skills. Certainly that has more appeal to someone who knows C++ intimately: continue to refine this beautiful machine, or bang your head against the wall?
For someone without that investment, Rust might have higher ROI; that's fine, let them learn it. We still need C++ developers. Morbid but true, to a degree: 'Progress happens one funeral at a time.'
I still think you're off the mark. Again, most existing Rust developers are not "blank slate Rust developers". That they do not rush out to rewrite all of their past projects in C++ may be more about sunk costs, and wanting to solve new problems with from-scratch development.
> most existing Rust developers are not "blank slate Rust developers"
Not most, but the pool of software devs has been doubling every five years, and Rust matches C# on "Learning to Code" votes at Stack Overflow's last survey, which is crazy considering how many people learn C# just to use Unity. I think you underestimate how many developers are Rust blank slates.
Anecdotally, I've recently come across comments from people who've taught themselves Rust but not C or C++.
It’s okay to enjoy driving an outdated and dangerous car for the thrill because it makes pleasing noise, as long as you don’t annoy too much other people with it.
Apple handled this problem by adding memory safety to C (Firebloom). It seems unlikely they would throw away that investment and move to Rust. I’m sure lots of other companies don’t want to throw away their existing code, and when they write new code there will always be a desire to draw on prior art.
That's a rather pessimistic take compared to what's actually happening. What you say should apply to the big players like Amazon, Google, Microsoft, etc the most, because they arguably have massive C codebases. Yet, they're also some of the most enthusiastic adopters and promoters of Rust. A lot of other adopters also have legacy C codebases.
I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, which is also what some of these companies have officially mentioned.
C is a nice little language that's easy to learn and understand. But the price you pay for it is in large applications where you have to handle resources like heap allocations. C doesn't offer any help there when you make such mistakes, though some linters might catch them. The reason for this, I think, is that C was developed in an era when they didn't have so much computing power to do such complicated analysis in the compiler.
People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, then you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, buses, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
It’s available on more obscure platforms than Rust, and more people are familiar with it.
I wouldn't say it's inevitable that everything will be rewritten in Rust; at the very least it will take decades. C has been with us for more than half a century and is the foundation of pretty much everything; it will take a long time to migrate all that.
More likely is that they will live next to each other for a very, very long time.
you can look at rust sources of real system programs like framekernel or such things, uefi-rs, etc.
there you can likely explore well the boundaries where rust does and does not work.
people have all kinds of opinions. mine is this:
if you need unsafe peppered around, the only thing rust offers is being very unergonomic. it's hard to write and hard to debug for no reason. Writing memory-safe C code is easy. The problems rust solves aren't bad, just solved in a way that's way more complicated than writing the same (safe) C code.
a language is not unsafe. you can write perfectly shit code in rust. and you can write perfectly safe code in C.
people need to stop calling a language safe and then reimplementing other peoples hard work in a bad way creating whole new vulnerabilities.
I disagree. Rust shines when you need to perform "unsafe" operations. It forces programmers to be explicit and to isolate their use of unsafe memory operations. This makes it significantly more feasible to keep track of invariants.
It is completely beside the point that you can also write "shit code" in Rust. Just because you are fed up with the "reimplement the world in Rust" culture does not mean that the tool itself is bad.
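A minimal sketch of that isolation, using only std: the single unsafe line is auditable in place, and no caller can misuse the function to violate the invariant.

```rust
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: the slice was just checked to be non-empty,
        // so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}
```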
Rust still compiles into bigger binary sizes than C, by a small amount. Although it’s such a complex thing depending on your code that it really depends case-by-case, and you can get pretty close. On embedded systems with small amounts of ram (think on the order of 64kbytes), a few extra kb still hurts a lot.
It's a bit like asking if there is any significant advantage to ICE motors over electric motors. They both have advantages and disadvantages. Every person who uses one or the other, will tell you about their own use case, and why nobody could possibly need to use the alternative.
There's already applications out there for the "old thing" that need to be maintained, and they're way too old for anyone to bother with re-creating it with the "new thing". And the "old thing" has some advantages the "new thing" doesn't have. So some very specific applications will keep using the "old thing". Other applications will use the "new thing" as it is convenient.
To answer your second question, nothing is inevitable, except death, taxes, and the obsolescence of machines. Rust is the new kid on the block now, but in 20 years, everybody will be rewriting all the Rust software in something else (if we even have source code in the future; anyone you know read machine code or punch cards?). C'est la vie.
A lot of C's popularity is with how standard and simple it is. I doubt Rust will be the safe language of the future, simply because of its complexity. The true future of "safe" software is already here, JavaScript.
There will be small niches leftover:
* Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
* OS / Kernel - Nearly all of the relevant code is unsafe. There aren't many real benefits. It will happen anyways due to grant funding requirements. This will take decades, maybe a century. A better alternative would be a verified kernel with formal methods and a Linux compatibility layer, but that is pie in the sky.
* Game Engines - Rust screwed up its standard library by not putting custom allocation at the center of it. Until we get a Rust version of the EASTL, adoption will be slow at best.
* High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
* Browsers - Despite being born in a browser, Rust is unlikely to make any inroads. Mozilla lost their ability to make effective change and already killed their Rust project once. Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
* High-Throughput Services - This is where I see the bulk of Rust adoption. I would be surprised if major rewrites aren't already underway.
This isn't really true; otherwise, there would be no reason for no_std to exist. Data race safety is independent of whether you allocate or not, lifetimes can be handy even for fixed-size arenas, you still get bounds checks, you still get other niceties like sum types/an expressive type system, etc.
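For instance, a minimal no_std sketch (it builds as a library crate; a final binary would still need a panic handler): no allocator and no heap, yet the borrow checker, bounds checks, and the rest of the core language are all still there.

```rust
#![no_std]

// Sum a fixed-size buffer without ever touching a heap.
pub fn checksum(buf: &[u8; 16]) -> u8 {
    buf.iter().fold(0u8, |acc, &b| acc.wrapping_add(b))
}
```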
> OS / Kernel - Nearly all of the relevant code is unsafe.
I think that characterization is rather exaggerated. IIRC the proportion of unsafe code in Redox OS is somewhere around 10%, and Steve Klabnik said that Oxide's Hubris has a similarly small proportion of unsafe code (~3% as of a year or two ago) [0]
> Browsers - Despite being born in a browser, Rust is unlikely to make any inroads.
Technically speaking, Rust already has. There has been Rust in Firefox for quite a while now, and Chromium has started allowing Rust for third-party components.
> Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
Google is transitioning large parts of Android to Rust and there is now first-party code in Chromium and V8 in Rust. I’m sure they’ll continue to write new C++ code for a good while, but they’ve made substantial investments to enable using Rust in these projects going forward.
Also, if you’re imagining the board of a multi-trillion dollar market cap company is making direct decisions about what languages get used, you may want to check what else in this list you are imagining.
There are memory safety issues that literally only apply to memory on the stack, like returning dangling pointers to local variables. Not touching the heap doesn't magically avoid all of the potential issues in C.
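The stack variant is exactly what Rust rejects at compile time. A sketch of the rejected code, where the C analogue compiles (at best with a warning) and dangles:

```rust
fn dangle() -> &'static i32 {
    let x = 42;
    &x // error[E0515]: cannot return reference to local variable `x`
}
```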
Rust is already making substantial inroads in browsers, especially for things like codecs. Chrome also recently replaced FreeType with Skrifa (Rust), and the JS Temporal API in V8 is implemented in Rust.
> Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
Modern embedded isn't your grandpa's embedded anymore. Modern embedded chips have many KiB of RAM, some even above 1 MiB, and have been like that for almost a decade (look at the ESP32, for instance). I once worked on embedded projects based on the ESP32 that used full C++, with allocators, exceptions, ... using SPI RAM, and it worked great. There's a fantastic port of ESP-IDF to Rust that Espressif themselves maintain nowadays, too.
> Rust is also too complex for smaller systems to write compilers.
I am not a compiler engineer, but I want to tease apart this statement. As I understand it, the main Rust compiler uses the LLVM framework, which uses an intermediate language that is somewhat like platform-independent assembly code. As long as you can write a lexer/parser to generate the intermediate language, there will be a separate backend to generate machine code from it. In my (non-compiler-engineer) mind, this separates the concern of the front-end language (Rust) from the target platform (embedded). Do you agree? Or do I misunderstand?
Memory safety applies to all memory. Not just heap allocated memory.
This is a strange claim because it's so obviously false. Was this comment supposed to be satire and I just missed it?
Anyway, Rust has benefits beyond memory safety.
> Rust is also too complex for smaller systems to write compilers.
Rust uses LLVM as the compiler backend.
There are already a lot of embedded targets for Rust and a growing number of libraries. Some vendors have started adopting it with first-class support. Again, it's weird to make this claim.
> Nearly all of the relevant code is unsafe. There aren't many real benefits.
Unsafe sections do not make the entire usage of Rust unsafe. That's a common misconception from people who don't know much about Rust, but it's not like the unsafe keyword instantly obliterates any Rust advantages, or even all of its safety guarantees.
It's also strange to see this claim under an article about the kernel developers choosing to move forward with Rust.
> High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
C++ and VHDL aren't interchangeable. They serve different purposes for different layers of the system. They aren't moving everything to FPGAs.
Betting on a garbage collected language is strange. Tail latencies matter a lot.
This entire comment is so weird and misinformed that I had to re-read it to make sure it wasn't satire or something.
> Memory safety applies to all memory. Not just heap allocated memory.
> Anyway, Rust has benefits beyond memory safety.
I want to elaborate on this a little bit. Rust uses some basic patterns to ensure memory safety. They are: 1. RAII, 2. move semantics, and 3. borrow checking.
This combination however, is useful for compile-time-verified management of any 'resource', not just heap memory. Think of 'resources' as something unique and useful, that you acquire when you need it and release/free it when you're done.
For regular applications, it can be heap memory allocations, file handles, sockets, resource locks, remote session objects, TCP connections, etc. For OS and embedded systems, that could be a device buffer, bus ownership, config objects, etc.
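A short sketch of RAII beyond heap memory; `DeviceBuffer` and the release step are hypothetical names, the pattern is the point.

```rust
struct DeviceBuffer {
    id: u32,
}

impl Drop for DeviceBuffer {
    fn drop(&mut self) {
        // Hypothetical: hand the buffer back to the device driver here.
        println!("releasing device buffer {}", self.id);
    }
}

fn main() {
    let buf = DeviceBuffer { id: 7 };
    let _owner = buf; // moved: exactly one owner at any time
    // println!("{}", buf.id); // error[E0382]: borrow of moved value
} // `_owner` dropped here; the buffer is released exactly once
```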
> > Nearly all of the relevant code is unsafe. There aren't many real benefits.
Yes. This part is a bit weird. The fundamental idea of 'unsafe' is to limit the scope of unsafe operations to as few lines as possible (The same concept can be expressed in different ways. So don't get offended if it doesn't match what you've heard earlier exactly.) Parts that go inside these unsafe blocks are surprisingly small in practice. An entire kernel isn't all unsafe by any measure.
Shitloads of already existing libraries. For example I'm not going to start using it for Arduino-y things until all the peripherals I want have drivers written in Rust.
I think it'll be less like telegraph lines, which were replaced fully for a major upgrade in functionality, and more like rail lines, which were standardized and ubiquitous, still hold some benefit, but mainly only exist in areas people don't venture nearly as much.
For my hobby code, I'm not going to start writing Rust anytime soon. My code is safe enough and I like C as it is. I don't write software for martian rovers, and for ordinary tasks, C is more ergonomic than Rust, especially for embedded tasks.
For my work code, it all comes down to SDKs and stuff. For example I'm going to write firmware for Nordic ARM chip. Nordic SDK uses C, so I'm not going to jump through infinite number of hoops and incomplete rust ports, I'll just use official SDK and C. If it would be the opposite, I would be using Rust, but I don't think that would happen in the next 10 years.
Just like C++ never killed C, despite being a perfect replacement for it, I don't believe that Rust will kill C, or C++, because it's an even less convenient replacement. It'll dilute the market, for sure.
When I was in school there were a handful of people (a couple dozen in a student body of ~10k) who did it. They'd use the athletics center for showers, and the buildings allowed 24hr key access for bathrooms. Most of them had jobs in dining services for the free meal per shift. These people weren't looked at as being poor; they were looked at as hard-assed cheapskates who were toughing out a hard lifestyle and consequently saving money in the process, and were generally envied for doing something most couldn't stomach. If you can tolerate it, it's a great deal if you're a student, because you can use student access to facilities to cover a lot of your other needs; it's truly just a place to sleep, and the ~20yo body can tolerate the lifestyle. Motorhomes weren't allowed and they were real touchy about box trucks, so minivans and conversion vans were mostly what you'd see, though there was the occasional station wagon.
And this was not in a mild climate, wikipedia says the mean daily temp in January is 18deg (freedom). That said, I have no doubt that snootier universities didn't allow these sorts of things and would harass the shit out of you with their PD if you tried this there.
I don't want to bemoan this. There's some people who don't have the resources to house a bunch of people but do have authority over a parking lot. I applaud anyone doing what they can for others with what they've got.
It's just that when this came up shortly after April 1st this year, there was nothing to it, only circular references, supported by nice sloppy AI graphics, and nothing else. It was an assumption/speculation, uttered by someone, endlessly repeated and overamplified.
I don't see any reason why it should be different this time.
I loved the campaigns so much that I spent many dollars to play with the campaign editor in a net bar back then. I never figured out how to recreate the Corsair scene at the beginning of Protoss level 2. It was only after many years that I found out that it requires a script not in the official editor — some modders created a new editor that includes all those “unofficial” scripts.
>how do hybrid schemes work out: some home, some office, less commute overall?
I think it depends on the type of work. I work as a support engineer for business stakeholders. Business stakeholders don't work in "Sprints", and always want to get anything ASAP. In that sense, if I want to maximize my value to the company, in-office is the best.
But frankly, I don't like that, so working remote is the best for me, IN THAT PERSPECTIVE. However, I do love the snacks in the office, and I want to keep my job, so hybrid works the best for me. The stakeholders get to bug me from time to time on the 3 days per week, I book as many meetings as I can in those 3 days, and I bring a non-fiction book just to breathe a little better.
I just wish Toronto had cheaper housing though, so I could live closer to the office.
Art Deco has always been one of the best architectural styles, IMHO. I also like the pseudo-classical Greek design often found in American government buildings in small towns (city hall, the public library, and so on). They're very different styles, yet they complement each other nicely.
I had a wonderful retro futuristic dream about an automated Costco warehouse a few weeks ago. It was one of the less weird dreams so I still remember it clearly.
Basically, each section is like a closed area with some windows. Customers order at the computers by the windows and flash their membership cards. Robots glide left and right to move 10 samples to the customer, in an arm with rotating clips. Customers can press a button to rotate the samples, observe them, and place an order by pressing a button. Samples not chosen are temporarily stocked at the window as a “stack”.
In each closed section, there are humans who monitor and maintain the robots, and occasionally fetch samples when robots stop working (hopefully not too often, you know those 9s).
At the exit, a human worker assembles the packages and hands them to the customers with a smile. Customers have a last chance to return unwanted items.
Why was it a retro futuristic dream? Because the customers have the option to go into a bakery to enjoy a cup of coffee/tea and some cake, and socialize with fellow customers. All of them looked like the men and women from advertisements in Fallout 4.
What you've re-invented is Keedoozle, from 1937.[1] This was the first automated grocery store. Three stores were opened, but there were enough mechanical problems that it didn't work well.
There were also automats, automated restaurants serving all food through a vending machine (or more accurately, a vending wall). Classically all for a single fixed price (a nickel).
These are featured in several cultural references, such as the 1962 Delbert Mann film That Touch of Mink, and PDQ Bach's "Concerto for Horn and Hardart" (being named after a prominent New York City automat chain).
And what some of us might not have the context for, is that grocery stores at the time were usually clerk-serviced; Just like you don't pump your own gas in New Jersey, at the time the norm was that you handed the clerk a list of products and they fetched them from the shelves for you.
Arguably this model has a great deal of compatibility with robotic compact storage, especially in high-land-value areas.
That kinda stuff is why I'm an incrementalist, as opposed to "Great Man" theories of civilization. A big impressive product or leap-forward is mostly luck and thousands of cascading preconditions on small improvements everywhere else, and often not even the first person to try.
It's not hard to imagine that if a fundamentally similar store took the world by storm today, there would be a profusion of news stories asserting that the founder is a genius visionary, with nary a peep for Clarence Saunders et al.
I think that's kind of the point: there are no "genius ideas", at least not at the level and frequency popularly portrayed. If teleportation isn't feasible then the idea isn't genius. If teleportation is feasible, then using it for transporting humans isn't genius, it's incredibly obvious.
Or to give a real-world example: The Wright brothers did some great work on making aircraft steerable and doing wind-tunnel tests, but working planes were mostly a product of ICE engines finally reaching sufficient power-to-weight ratios, not of the Wright brothers being unique geniuses. In a long line of people trying to build heavier-than-air aircraft they were simply the first to have access to the necessary technology to make it work
Sounds like the old general store model, you didn’t browse yourself, the shop keep would bring out what you wanted, it was always behind the counter. I experienced this in China when I started visiting in 1999/early 2000s, it’s mostly not like that anymore though. You still have department stores where you need to buy things first before touching them, though.
Oh, Service Merchandise was a thing in the USA also, where I was living in Mississippi at least. It was basically a catalog-focused store with a showroom.
IKEA is kind of like that also, but you have to get everything yourself after picking it out upstairs. And Sears might have been like this at some point before I was born.
Argos in the UK was similar. You would go into the store and look up the product in a catalog. Then go to counter and order it, wait 2-5 minutes and they give you the product. I found it quite convenient.
Screwfix do this too. Just a counter with a handful of staff who go and get your items.
If you pre-order it's waiting at the desk. Very handy for people who can order from the job site on the account and send the lad round to grab it.
And a (relatively) unshittified website too because if jobbing tradies can't use the damn thing because it's too loaded down with ads and bullshit, they just won't.
They're still there. Was surprised to run into one recently when I was in London (they pulled out of Ireland a while back, and I'd assumed they'd just closed totally at that point, because it _does_ feel like an increasingly marginalised business model.)
They still exist. Tend to be pretty competitive on price, although they must be losing out to online shopping in a lot of places since they don't offer any showroom advantage.
In my experience because you're picking up from the Argos you can do an instant return if you realize you ordered wrong (or the item is rubbish). Not perfect but a good way to get your hands on the product with an easy refund option
Little bit more specialized, but Lee Valley Tools [https://www.leevalley.com/en-ca] stores seem to still operate this way. Showroom (and a few computer kiosks) and order forms up front, then line up for them to pull the items from the back.
Reading the history of Consumers (thanks, I never knew this existed):
>In the 1990s, Consumers Distributing struggled to compete with Zellers and then Walmart Canada. Consumers Distributing sought bankruptcy protection in 1996.
Yeah, that could be true. I'm not sure how many people are similar to me, who are allergic to "window shopping" and just want to buy, pay, and exit. My Costco session is less than 30 minutes (from parking to back at the car) on average.
I do research price, though, so if they show a big DISCOUNT sign and is more or less honest with it, I'll probably grab some, too.
In the dream customers just walk around and make orders. It’s actually old style I think, but with robots. Yeah it’s a bit like cash and carry, but customers didn’t move into the sections. They just get to browse the samples robots carried to them.
TBH, now that I think about it, the dream was way more vague than what I described in the reply. My brain probably reasoned about the idea subconsciously.
The only exception in warehouse was the cafeteria. I guess my brain wanted to make something retro futuristic so it made the cafeteria “retro” — manned by humans and cooked by humans too. There were even balloons inside now that I recall…
I remember those stores as I came from a similar background. One vital difference is that they all have workers who have a straight face and don't give a fuck about customer service.
Then in the 90s they were all washed away by the new ones.