> Exactly, designing a 'third place' that isn't alcohol focused seems to be a tough nut to crack.
how so? I go to a climbing gym and it is a pretty social (and, of course, healthy) activity... crossfit is not my thing but apparently it is similar for more traditional workouts. to the extent you can consider a cycling or running club a "space" those are similar. dog parks for dog owners, playgrounds for parents, etc...
Many of those lack spontaneity though. I don’t walk past a climbing gym with a friend of mine and think “fancy popping in there for an hour or so?” You need to plan a visit to many of those places so you have the right clothing/footwear/etc.
The social point of a pub is that you can just decide to go in on a whim. Pubs are increasingly not about alcohol either. I’ve had a few instances in the last couple of years where I couldn’t drink alcohol for extended periods (various reasons, mostly medication related). Hasn’t stopped me going to the pub.
Years ago you would get an odd look if a group walked into a pub and all ordered soft drinks but not so much now (well, you still will get that in some pubs).
Obviously I’m not out looking for another place to buy a lime and soda after midnight but I can quite happily have an evening out without having to drink alcohol whilst others do or don’t around me.
Of course not, but I'd rephrase what the OP said as something more like "it's unrealistic to expect them to go 'hey, guess what, never mind about all that' after half a year".
I think it's more realistic to expect that they're going to stick with a UI officially called "Liquid Glass" for the next decade, but it's going to go through some serious iterative changes in the next couple of years -- probably much more than it would have were Alan Dye still around.
is there some reason to implement it as a time limit instead of iterations or something else deterministic? it being affected by CPU speed or machine load seems like an obvious downside.
or whatever makes sense if “iterations” isn’t a thing, I know nothing about chess algorithms
It’s simpler. Chess is a search through the space of possible moves, looking for a move that’s estimated to be better than the best move you’ve seen so far.
The search is by depth of further moves, and “better” is a function of heuristics (explicit or learned) on the resulting board positions, because most of the time you can’t be sure a move will inevitably result in a win or a loss.
So any particular move evaluation might take more or less time before the algorithm gives up on it—or chooses it as the new winner. To throw a consistent amount of compute at each move, the simple thing to do is give the engine consistent amounts of time per move.
> To throw a consistent amount of compute at each move, the simple thing to do is give the engine consistent amounts of time per move.
The simple thing to do is give it a limit on the total number of states it can explore in its search.
If your goal is consistency, wall-clock time makes no sense. If I run 'make -j20', should the chess computer become vastly easier because the CPU is being used to compile, not search? Should 'renice -n 20 -p <chess app pid>' make the chess computer worse?
Should my computer thermal-throttling because it's a hot day make the chess computer worse, so chess is harder in winter?
If the goal is consistency, then wall-clock isn't the simple way to do it.
> It’s simpler than doing a limit on number of states
According to who?
A counter that you ++ each move sounds a lot easier to me than throwing off a separate thread/callback to handle a timer.
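Roughly what I mean, sketched in Rust with a made-up Board type (legal_moves/apply/evaluate are placeholders, not any real engine's API):

    struct Board; // placeholder position type (hypothetical)

    impl Board {
        fn legal_moves(&self) -> Vec<u32> { vec![] }  // hypothetical stand-in
        fn apply(&self, _mv: u32) -> Board { Board }  // hypothetical stand-in
        fn evaluate(&self) -> i32 { 0 }               // heuristic score of the position
    }

    struct Search {
        nodes: u64,  // the counter you ++ per state explored
        budget: u64, // fixed limit, independent of CPU speed or machine load
    }

    impl Search {
        fn negamax(&mut self, board: &Board, depth: u32) -> i32 {
            self.nodes += 1;
            // Out of budget (or out of depth): fall back to the heuristic.
            if self.nodes >= self.budget || depth == 0 {
                return board.evaluate();
            }
            let mut best = i32::MIN + 1;
            for mv in board.legal_moves() {
                let score = -self.negamax(&board.apply(mv), depth - 1);
                best = best.max(score);
            }
            if best == i32::MIN + 1 { board.evaluate() } else { best } // no legal moves
        }
    }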
> Doing a time limit also enforces bot moving in a reasonable time.
It's designed for specific hardware, and will never have to run on anything significantly slower, but might have to run on things significantly faster. It doesn't need a time cutoff that would only matter in weird circumstances and make it do a weirdly bad move. It needs to be ready for the future.
> It puts a nice limit to set up a compromise between speed and difficulty.
Both methods have that compromise, but using time is way more volatile.
A time limit is also deterministic in some sense. Level settings used to be mainly time based: computers at lower settings were no serious competition for decent players, but you don't necessarily want to wait 30 seconds for each move, so there were more casual and more serious levels.
Limiting the search depth is much more deterministic. At lower levels it has hilarious results, and is pretty good at emulating beginning players (who know the rules, but have a limited ability to calculate moves ahead).
One problem with fixed search depth is that I think most good engines prefer to use dynamic search depth (where they sense that some positions need to be searched a bit deeper to reach a quiescent point), so they will be handicapped by a fixed depth.
> One problem with fixed search depth is that I think most good engines prefer to use dynamic search depth (where they sense that some positions need to be searched a bit deeper to reach a quiescent point), so they will be handicapped by a fixed depth.
Okay, but I want to point out nobody was suggesting a depth limit.
For a time-limited algorithm to work properly, it has to have some kind of sensible ordering of how it evaluates moves, looking deeper as time passes in a dynamic way.
Switch to an iteration limit, and the algorithm will still have those features.
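The outer-loop side of the same idea, as a rough standalone sketch in Rust - search_to_depth is a made-up stand-in for a real engine call:

    // search_to_depth stands in for a real engine call that also reports
    // how many nodes it visited at that depth (hypothetical placeholder).
    fn search_to_depth(_depth: u32) -> (i32 /*score*/, u32 /*best move*/, u64 /*nodes*/) {
        (0, 0, 1_000)
    }

    fn pick_move(budget: u64) -> u32 {
        let (mut best_move, mut spent) = (0u32, 0u64);
        for depth in 1.. {
            let (_score, mv, nodes) = search_to_depth(depth);
            spent += nodes;
            best_move = mv;               // keep the deepest fully searched result
            if spent >= budget { break; } // deterministic cut-off, same on any machine
        }
        best_move
    }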
The section "The Problem of the Individual-Element Mindset" bugs me quite a bit, the core of it being:
> This architectural mindset does lead to loads of problems as a project scales. Unfortunately, a lot of people never move past this point on their journey as a programmer. Sometimes they do not move past this point as they only program in a language with automatic memory management (e.g. garbage collection or automatic reference counting), and when you are in such a language, you pretty much never think about these aspects as much.
Billions of dollars worth of useful software has been shipped in languages with garbage collection or ARC: roughly the entire Android (JVM) and iOS (ARC) application ecosystems, massively successful websites built on top of JVM languages, Python (Instagram etc.), PHP (Wikipedia, Facebook, ...).
In game development specifically, since there's a Casey Muratori video linked here, we have the entire Unity engine set of games written in garbage-collected C#, including a freaking BAFTA winner in Outer Wilds. Casey, meanwhile, has worked on a low-level game development video series for a decade and... never actually shipped a game?
I don't think Casey has ever claimed to be the developer of Bink 2. He usually brings it up when explaining the kind of work that was performed at Rad Game Tools, then explicitly states that his work was largely on Granny 3d in particular.
Remember the Muratori v Uncle Bob debate? Back then the ad-hominems were flying left and right, with Muratori being the crowd favourite (a real programmer) compared to Uncle Bob (who allegedly didn't write software).
Then a few months ago Muratori gave a really interesting multi-hour talk on the history of OOP (including plenty of well-thought-out criticism). I liked the talk, so I fully expected a bunch of "real programmers" to shoot that talk down as academic nonsense.
Anyway, looks like Muratori is right on schedule to graduate from programmer to non-programmer.
> the entire Unity engine set of games written in garbage-collected C#, including a freaking BAFTA winner in Outer Wilds.
Some of those games (though not all of them, unfortunately) try to work around C#'s garbage collector for performance reasons, using essentially ad-hoc memory allocators via object pools and similar approaches. This is probably what this part...
---
And if you ever do think about these, it’s usually because you are trying to do something performance-oriented and you have to pretend you are managing your own memory. It is common for many games that have been written with garbage collected languages to try to get around the inadequacies of not being able to manage your own memory
---
...is referring to.
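For anyone unfamiliar, the pattern being described is roughly this (sketched in Rust rather than C#, with a made-up BulletPool type): pre-allocate a batch of objects once and recycle them, so the hot path never hits the allocator (or, in C#'s case, never feeds the GC).

    struct Bullet { x: f32, y: f32, alive: bool } // hypothetical pooled object

    struct BulletPool {
        items: Vec<Bullet>, // allocated once, up front
    }

    impl BulletPool {
        fn new(capacity: usize) -> Self {
            let items = (0..capacity)
                .map(|_| Bullet { x: 0.0, y: 0.0, alive: false })
                .collect();
            Self { items }
        }

        // Reuse a dead slot instead of allocating a new object per spawn.
        fn spawn(&mut self, x: f32, y: f32) -> Option<&mut Bullet> {
            let slot = self.items.iter_mut().find(|b| !b.alive)?;
            *slot = Bullet { x, y, alive: true };
            Some(slot)
        }

        // Returning an object to the pool is just flagging its slot as free.
        fn despawn(&mut self, index: usize) {
            if let Some(b) = self.items.get_mut(index) {
                b.alive = false;
            }
        }
    }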
> Casey, meanwhile, has worked on a low-level game development video series for a decade and... never actually shipped a game?
These videos are all around 90-120 minutes long, posted with gaps between them (my guess is whenever Casey had time), and the purpose and content of these videos is pedagogical, so he spends time explaining what he does - they aren't just screencasts of someone writing code.
If you add up the videos, and assume someone working on it 6h/day on workdays alone, it'd take around 8-9 months to write whatever is written there - and that still ignores the amount of time spent on explanations (which are the main focus of the videos).
So it is very misleading to use the series as some sort of measure for what it'd take Casey (or anyone else, really) to make a game using "low level" development.
Yes, of course, but the actual games are written (primarily! sometimes they will optimize something when actually necessary with a lower-level language! that's great!) in the scripting language.
The OP implies heavily that writing a program in a language with anything but pure manual memory management makes you lesser as a programmer than him: "Unfortunately, a lot of people never move past this point on their journey as a programmer" implies he has moved further on in his "journey" than those that dare to use a language with GC.
(and with respect to C++ note that OP considers RAII to be deficient in the same way as GC and ARC)
Indeed, to the extent that Casey has a point here (which sure, I think in its original context it was fine, it's just unfortunate if you mistook a clever quip at a party for a life philosophy), C++ is riddled with types explicitly making this uh, choice.
It's not near the top of the list of reasons std::unordered_map is a crap type, but it's certainly on there. If we choose the capacity explicitly, knowing we'll want to put no more than 8340 (key, value) pairs into Rust's HashMap, we only allocate once, to make enough space for all 8340 pairs - because duh, that's what capacity means. But std::unordered_map doesn't take the hint: it merely makes its internal hash table big enough, and each of the 8340 pairs is allocated separately.
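Concretely, on the Rust side it's just this (build is a made-up example function; the one up-front allocation is the table itself, the String values obviously still allocate):

    use std::collections::HashMap;

    fn build() -> HashMap<u32, String> {
        // Reserves room for at least 8340 entries in one go...
        let mut map: HashMap<u32, String> = HashMap::with_capacity(8340);
        for i in 0..8340u32 {
            // ...so none of these inserts reallocates the map's own storage.
            map.insert(i, i.to_string());
        }
        map
    }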
> If you want to make pointers not have a nil state by default, this requires one of two possibilities: requiring the programmer to test every pointer on use, or assume pointers cannot be nil. The former is really annoying, and the latter requires something which I did not want to do (which you will most likely not agree with just because it doesn’t seem like a bad thing from the start): explicit initialization of every value everywhere.
In Kotlin (and Rust, Swift, ...) these are not the only options. You can check a pointer/reference once, and then use it as a non-nullable type afterwards. And if you don't want to do that, you can just add !!/!/unwrap: you are just forced to explicitly acknowledge that you might blow up the entire app.
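Rough sketch of what that looks like with Rust's Option - greet and the names are made up:

    fn greet(name: Option<&str>) {
        // Check once; inside the branch `n` is a plain &str, no nil state to worry about.
        if let Some(n) = name {
            println!("hello, {n}");
        }

        // Or explicitly accept that this can blow up, and say so in the code:
        let n = name.unwrap(); // panics if `name` is None - the !! / ! equivalent
        println!("hello again, {n}");
    }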
stacked diffs are the best approach. working at a company that uses them, and reading about the "pull request" workflow that everyone else subjects themselves to, makes me wonder why everyone is not using stacked diffs instead of repeating this "squash vs. not squash" debate eternally.
every commit is reviewed individually. every commit must have a meaningful message, no "wip fix whatever" nonsense. every commit must pass CI. every commit is pushed to master in order.
> First, if you aren't writing device drivers/kernels or something very low level there is a high probability your program will have zero unsafe usages in it.
from the original comment. Meanwhile all C code is implicitly “unsafe”. Rust at least makes it explicit!
But even if you ignore memory safety issues bypassed by unsafe, Rust forces you to handle errors, it doesn’t let you blow up on null pointers with no compiler protection, it allows you to represent your data exhaustively with sum types, etc etc etc
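A minimal generic sketch of those points (Shape, area, read_config and config.toml are all made up, not from any real codebase):

    use std::fs;

    enum Shape {                  // a sum type
        Circle { r: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        match s {
            // The compiler rejects this match if any variant is left unhandled.
            Shape::Circle { r } => std::f64::consts::PI * r * r,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn read_config() -> String {
        // read_to_string returns a Result: you can't use the contents without
        // deciding what happens on error, and silently dropping it is a lint.
        fs::read_to_string("config.toml").unwrap_or_default()
    }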
Because writing proper kernel C code requires decades of experience to navigate the implicit conventions and pitfalls of the existing codebase. The human pipeline producing these engineers is drying up because nobody's interested in learning that stuff by going through years of patch rejection from maintainers that have been at it since the beginning.
Rust's rigid type system, compiler checks and insistence on explicitness force a _culture change_ in the organization. In time, this means that normal developers will regain a chance to contribute to the kernel with much less chance of breaking stuff. Rust not only makes the compiled binary more robust but also makes the codebase more accessible.
I am quite certain that someone who has been on HN as long as you have is capable of understanding the difference between 0% compiler-enforced memory safety in a language with very weak type safety guarantees, and 95%+ of code regions having strong type safety guarantees even in the worst case of low-level driver code that performs DMA.
The first two are the same article, but they point out that certain structures can be very hard to write in Rust, with linked lists being a famous example. The point stands, but I would say the tradeoff is worth it (the author also mentions at the end that they still think Rust is great).
The third link is absolutely nuts. Why would you want to initialize a struct like that in Rust? It's like saying a functional programming language is hard because you can't do goto. The author sets themselves a challenge to do something that absolutely goes against how Rust works, and then complains about how hard it is.
If you want to do it to interface with non-rust code, writing a C-style string to some memory is easier.
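E.g. a made-up helper along these lines (a sketch only, not battle-tested) covers that interop case:

    use std::ffi::CString;
    use std::os::raw::c_char;

    // Made-up helper: copy a Rust string into a caller-provided C buffer as a
    // NUL-terminated string, truncating if it doesn't fit.
    unsafe fn write_c_string(dst: *mut c_char, cap: usize, s: &str) {
        let c = CString::new(s).expect("no interior NUL bytes");
        let bytes = c.as_bytes_with_nul();
        let n = bytes.len().min(cap);
        unsafe {
            std::ptr::copy_nonoverlapping(bytes.as_ptr().cast::<c_char>(), dst, n);
            if n > 0 {
                *dst.add(n - 1) = 0; // keep the terminator even when truncated
            }
        }
    }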
And it can easily be more than 5%, since some projects both have lots of large unsafe blocks, and also the presence of an unsafe block can require validation of much more than the block itself. It is terrible of you and overall if my understanding is far better than yours.
And even your argument taken at face value is poor, since if it is much harder, and it is some of the most critical and already-hard code, like some complex algorithm, it could by itself be worse overall. And Rust specifically has developers use unsafe for some algorithm implementations, for flexibility and performance.
> since if it is much harder, and it is some of the most critical and already-hard code, like some complex algorithm, it *could* by itself be worse overall.
(Emphasis added)
But is it worse overall?
It's easy to speculate that some hypothetical scenario could be true. Of course, such speculation on its own provides no reason for anyone to believe it is true. Are you able to provide evidence to back up your speculation?
Are three random people saying unsafe Rust is hard supposed to make us forget about C's legendary problems with UB, nil pointers, memory management bugs, and staggering number of CVEs?
You have zero sense of perspective. Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it), we're talking about a tiny fraction of the overall code of Rust programs in the wild. You have to pay careful attention to C's issues on virtually every single line of code.
With all due respect this may be the singular dumbest argument I’ve ever had the displeasure of participating in on Hacker News.
> Even if we accept the premise that unsafe Rust is harder than C (which frankly is ludicrous on the face of it)
I think there's a very strong dependence on exactly what kind of unsafe code you're dealing with. On one hand, you have relatively straightforward stuff like get_unchecked or calling into simpler FFI functions. On the other hand, you have stuff like exposing safe, ergonomic, and sound APIs for self-referential structures, which is definitely an area of active experimentation.
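On the straightforward end of that spectrum, the whole safety argument often fits in one comment (toy example):

    fn sum_first_two(xs: &[i32]) -> i32 {
        assert!(xs.len() >= 2);
        // SAFETY: the assert above guarantees indices 0 and 1 are in bounds.
        unsafe { *xs.get_unchecked(0) + *xs.get_unchecked(1) }
    }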
Of course, in this context all that is basically a nitpick; nothing about your comment hinges on the parenthetical.
Well, you're the one asking for a comparison with C, and this subthread is generally comparing against C, so you tell us.
> Modern C++ provides a lot of features that makes this topic easier, also when programs scale up in size, similar to Rust. Yet without requirements like no universal aliasing. And that despite all the issues of C++.
Well yes, the latter is the tradeoff for the former. Nothing surprising there.
Unfortunately even modern C++ doesn't have good solutions for the hardest problems Rust tackles (yet?), but some improvement is certainly more welcome than no improvement.
> Which is wrong
Is it? Would you be able to show evidence to prove such a claim?
The only thing I really found weird syntactically when learning it was the single quote for lifetimes because it looks like it’s an unmatched character literal. Other than that it’s a pretty normal curly-braces language, & comes from C++, generic constraints look like plenty of other languages.
Of course the borrow checker and when you use lifetimes can be complex to learn, especially if you’re coming from GC-land, just the language syntax isn’t really that weird.
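For anyone who hasn't seen it, all three bits show up in one toy signature like this:

    use std::fmt::Display;

    // 'a is a lifetime (the unmatched-looking single quote), & is a borrow as in
    // C++, and `T: Display` is a generic constraint like in many other languages.
    fn longest<'a, T: Display>(a: &'a str, b: &'a str, label: T) -> &'a str {
        println!("comparing for {label}");
        if a.len() >= b.len() { a } else { b }
    }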
Agreed. In practice Rust feels very much like a rationalized C++ in which 30 years of cruft have been shrugged off. The core concepts have been reduced to a minimum and reinforced. The compiler error messages are wildly better. And the tooling is helpful and starts with opinionated defaults. Which all leads to the knock-on effect of the library ecosystem feeling much more modular, interoperable, and useful.
I feel like it is the opposite: Go gives you a ton of rope to hang yourself with and hopefully you will notice that you did. Error handling is essentially optional, there are no sum types and no exhaustiveness checks, the stdlib does things like assume filepaths are valid strings, if you forget to assign something it just becomes zero regardless of whether it's semantically reasonable for your program to do that, no nullability checking enforcement for pointers, etc.
Rust OTOH is obsessively precise about enforcing these sort of things.
Of course Rust has a lot of features and compiles slower.
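Small made-up example of that precision (the PORT variable name is purely for illustration):

    fn port_from_env() -> u16 {
        let port: u16;
        // Reading `port` before this assignment is a compile error,
        // not a silent zero value.
        port = match std::env::var("PORT") {
            Ok(s) => s.parse().unwrap_or(8080),
            Err(_) => 8080,
        };
        port
    }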
> the stdlib does things like assume filepaths are valid strings
A Go string is just an immutable sequence of bytes - it isn't required to be valid UTF-8.
The rest is true enough, but Rust doesn't offer just the bare minimum features to cover those weaknesses, it offers 10x the complexity. Is that worth it?