Not to take away from this device, I think it’s pretty neat. But you can run tailscale on anything, even Apple TVs. If you have a Unifi network, odds are that you have at least one spare computing device that can run tailscale.
Problem is that I think my Apple TV goes into some sort of deep idle mode where tailscale stops working. So it’s been effectively useless for me when I travel.
Check the Tailscale blog and docs for AppleTV. ISTR reading about an issue like this popping up and they had a workaround of some sort. Never happened to me.
If you compile with -fno-exceptions, you lose almost all of the STL.
You can compile with exceptions enabled, use the STL, and strictly enforce no allocations after initialization. It depends on how strict the spec you are trying to hit is.
Not my experience. I work with a -fno-exceptions codebase. Still quite a lot of std left. (Exceptions come with a surprisingly hefty binary size cost.)
Apparently, according to some ACCU and CppCon talks by Khalil Estel, this can be largely mitigated even in embedded, lowering the size cost by orders of magnitude.
Yeah. I unfortunately moved to an APU where code size isn't an issue so I never got the chance to see how well that analysis translated to the work I do.
Provocative talk though, it upends one of the pillars of deeply embedded programming, at least from a size perspective.
Not exactly sure what your experience is, but if you work in an -fno-exceptions codebase then you know that STL containers are not usable in that regime (with the exception of std::tuple, it seems; see the freestanding comment below). I would argue that the majority of STL use cases are for its containers.
So, what exact parts of the STL do you use in your codebase? Must be mostly compile-time stuff (types, type traits, etc.).
Of course you can, you just need to check your preconditions and limit sizes ahead of time - but you need to do that with exceptions too because modern operating systems overcommit instead of failing allocations and the OOM killer is not going to give you an exception to handle.
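For what it's worth, a minimal sketch of that style (the names and the size budget here are made up): reserve everything at init time and check a precondition before each insert, so the allocating/throwing path can never be reached at runtime.

    // Sketch: std::vector without relying on exceptions. All allocation
    // happens up front; a precondition check replaces bad_alloc/length_error.
    #include <cstdio>
    #include <vector>

    constexpr std::size_t kMaxSamples = 1024;  // hypothetical size budget

    bool push_sample(std::vector<int>& samples, int value) {
        if (samples.size() >= kMaxSamples) {
            return false;  // caller decides how to degrade gracefully
        }
        samples.push_back(value);  // never reallocates: capacity reserved below
        return true;
    }

    int main() {
        std::vector<int> samples;
        samples.reserve(kMaxSamples);  // the only allocation, at initialization

        for (int i = 0; i < 2000; ++i) {
            if (!push_sample(samples, i)) {
                std::printf("budget of %zu samples hit at i=%d\n", kMaxSamples, i);
                break;
            }
        }
    }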
I don't think it would be typical to depend on exception handling when dealing with boundary conditions with C++ containers.
I mean, .at is great and all, but it's really there to eliminate undefined behavior, and if the program just terminates then you've achieved that. I've seen decoders that catch std::out_of_range or even std::exception to handle the remaining bugs in the logic, though.
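Rough illustration of that trade-off (toy decoder, made-up data): .at() turns an out-of-bounds index from UB into something well-defined, whether that's a catchable std::out_of_range or, under -fno-exceptions, a terminate.

    // .at() vs operator[]: the bracket form would be UB here; .at() is defined.
    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    int main() {
        const std::vector<unsigned char> payload = {0x01, 0x02, 0x03};
        const std::size_t index = 7;  // pretend a buggy header claimed a longer payload

        try {
            unsigned char byte = payload.at(index);
            std::printf("byte: %u\n", byte);
        } catch (const std::out_of_range& e) {
            // The "catch out_of_range to cover remaining logic bugs" pattern;
            // with -fno-exceptions this would be std::terminate instead.
            std::printf("bad index: %s\n", e.what());
        }
    }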
Well, it's mostly type definitions and compiler stuff, like type_traits. Although I'm pleasantly surprised that std::tuple is fully supported. It looks like C++26 will bring in a lot more support for freestanding stuff.
No algorithms or containers, which to me are probably 90% of the most heavily used parts of the STL.
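To make that concrete, a small sketch (example names are mine) of the slice that stays usable in a -fno-exceptions or freestanding build: type traits, std::tuple, fixed-size std::array. Nothing here allocates or throws.

    #include <array>
    #include <cstdio>
    #include <tuple>
    #include <type_traits>

    template <typename T>
    constexpr T clamp_to_positive(T v) {
        static_assert(std::is_arithmetic_v<T>, "numeric types only");
        return v < T{0} ? T{0} : v;
    }

    int main() {
        constexpr std::array<int, 3> regs = {-4, 0, 9};  // fixed size, no heap
        constexpr auto t = std::make_tuple(clamp_to_positive(regs[0]), regs[2]);

        static_assert(std::get<0>(t) == 0);
        static_assert(std::get<1>(t) == 9);
        std::printf("%d %d\n", std::get<0>(t), std::get<1>(t));
    }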
It's not undermining, it's asserting/showing off their independence. India doesn't want to play for anyone's team, so they play on everyone's team. It's a reminder to all sides that they are not an automatic partner to be taken for granted.
Maybe they wouldn't experience so much pushback if they were more humble, had more respect for established software and practices, and were more open to discussion.
You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW" at everyone all the time and expect people to not react negatively.
> You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW"
It seems you are imagining things and hating people for the things you imagined.
In reality, there are situations where, during technical discussions, some people stand up and, with trembling voice, start derailing the discussion with "arguments" like "you are trying to convince everyone to switch over to the religion".
https://youtu.be/WiPp9YEBV0Q?t=1529
I disagree very strongly that a suggestion to change something is also a personal attack on the author of the original code. That’s not a professional or constructive attitude.
Are you serious? It's basically impossible to discuss C/C++ anymore without someone bringing up Rust.
If you search for HN posts with C++ in the title from the last year, the top post is about how C++ sucks and Rust is better. The fourth result is a post titled "C++ is an absolute blast" and the comments contain 128 (one hundred and twenty eight) mentions of the word "Rust". It's ridiculous.
Lots of current and former C++ developers are excited about Rust, so it’s natural that it comes up in similar conversations. But bringing up Rust in any conversation still does not amount to a personal attack, and I would encourage some reflection here if that is your first reaction.
To be clear, the "you" and "my" in your sentence refer to the same person. Julian appears to be the APT maintainer, so there's no compulsion except what he applies to himself.
(Maybe you mean this in some general sense, but the actual situation at hand doesn't remotely resemble a hostile unaffiliated demand against a project.)
Most of the repelling is happening on the anti-Rust side. The hate and vitriol has chased away Wedson Almeida Filho, Alex Gaynor, Hector Martin and Christoph Hellwig from the Rust in Linux project.
No, honestly, Rust just has a really crappy attitude and culture. Even as a person who should naturally like Rust, and who does plan to learn it despite that, I find these people really grating.
It seems that AMD, like many other companies, doesn’t “get” software. It’s a cost-center, a nuisance, not really hard engineering, the community will take care of that, etc. It’s pretty ironic.
From the performance comparison table, basically AMD could be NVIDIA right now, but they aren’t because… software?
That’s a complete institutional and leadership failure.
Ironically, building chips is the actual _hard_ part. The software and the compilers are not trivial but the iteration speed is almost infinite by comparison.
It goes to show that some companies just don’t “get” software. Not even AMD!
Funnily enough, AMD was actually first with GPGPU... they just floundered and managed to start 3 or more completely new software stacks for it, while CUDA focused not just on keeping one backward-compatible stack but also on making it work from the cheapest NVS card to high-end parts.
You're saying it like hardware and software are disjoint. You design hardware with software in mind (and vice versa); you need to if you want performance rivaling Nvidia's. This codesign, ensuring their products are not only usable but actually tailored to maximize resource utilization in real workloads (not driven by whatever benchmarks), is where AMD seems to be lacking.
Why oversimplify the premise and frame your take as some 'proof'? Just use the term counter-argument/example.
Right, binary gates are discrete elements but neural networks operate on a continuous domain.
I'm reminded of the Feynman anecdote when he went to work for Thinking Machines and they gave him some task related to figuring out routing in the CPU network of the machine, which is a discrete problem. He came back with a solution that used partial differential equations, which surprised everyone.
Any of the "design patterns" listed in the article will have a ton of popular open source implementations. For structured generation, I think outlines is a particularly cool library, especially if you want to poke around at how constrained decoding works under the hood: https://github.com/dottxt-ai/outlines
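If you just want the core idea rather than the library itself, here's a toy sketch (not the outlines implementation; the vocabulary and logits are invented): at each decoding step you mask out every token the grammar or regex would not accept, then pick from what's left.

    // Toy constrained decoding: greedy pick over logits, restricted to an
    // allowed set of token ids supplied by some grammar/regex tracker.
    #include <cstdio>
    #include <limits>
    #include <string>
    #include <unordered_set>
    #include <vector>

    int pick_allowed_token(const std::vector<float>& logits,
                           const std::unordered_set<int>& allowed) {
        int best = -1;
        float best_logit = -std::numeric_limits<float>::infinity();
        for (int id = 0; id < static_cast<int>(logits.size()); ++id) {
            if (allowed.count(id) == 0) continue;  // masked out by the grammar
            if (logits[id] > best_logit) { best_logit = logits[id]; best = id; }
        }
        return best;
    }

    int main() {
        // Made-up 6-token vocabulary and logits for one decoding step.
        const std::vector<std::string> vocab = {"{", "}", "\"name\"", ":", "42", "oops"};
        const std::vector<float> logits = {0.1f, 0.3f, 1.2f, 0.8f, 0.5f, 2.0f};

        // Suppose the JSON grammar state only permits '{' or '"name"' here.
        const std::unordered_set<int> allowed = {0, 2};

        int id = pick_allowed_token(logits, allowed);
        std::printf("chosen: %s\n", vocab[id].c_str());  // picks "name" even though "oops" scores higher
    }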