I hope they won't kill it, but you can experience it even leaner on: https://teletekst-data.nos.nl/webplus?p=101. It's my preferred way of quickly checking the news in The Netherlands, and it works on all devices.
There's nothing stopping someone from never buying another game on Steam, and moving to another marketplace on PC, unlike the store monopolies on consoles and mobile devices.
Except for the (large) library of games you already have on Steam.
That's a weak form of lock-in though, because you can switch to another platform going forward, and access your previous Steam purchases for free. No ads, no subscription fee. Stay signed out of Steam Chat and the social stuff, and you've basically just got a heavy application launcher.
Compared to PS+ and Xbox Live, which charge subscription fees to continue accessing online content, it's a pretty sweet deal for the consumer.
GP’s point was that you don’t need to close your Steam account because it doesn’t cost anything: you only pay for the individual games, not Steam itself.
EDIT: I believe GP is incorrect about the nature of subscription fees for PS+ and Xbox Live, though. As far as I know, standalone purchases of games from those services do not ever require a paid subscription - you need to retain your account and connectivity for the license checks, but that's free and does not require a paid subscription on either service, so pretty much the same as Steam. But they are correct that those platforms don't provide other store options. EDIT 2: Ah, I misread GP, they said "online content", and maintaining subscriptions on those services is required for that.
It's not any form of lock-in or anti-competitiveness, and it's not an aspect that's specific to Steam. You actually need to substantiate that instead of just claiming it. Almost all the online digital platforms do this, even non-gaming ones, and it's weird that it's only being argued here because it's about Steam.
This is a digital media rights issue, not a Steam issue.
I phrased that as "never buying another game on Steam" because of the existing library aspect. Sure it's nice to just have one launcher, but you can absolutely move to another marketplace while using Steam for the games that you've already bought.
You can keep Steam installed and keep downloading your games through it, but you don't need to give them another penny.
The other platforms on PC are exactly the same though. If you're against Steam because of this particular aspect, you should be against almost all platforms on PC as well as every console and phone platform. This is how it works on almost every digital media platform. It's not sufficient reason to treat Steam like a monopoly, because in itself it's not anti-competitive behaviour.
It's also not solvable unless you legislate platform agnostic licenses that are valid regardless of platform. Fat chance of that ever happening and I doubt that's actually what you're suggesting.
Sure, but it works on all PC platforms, the runtime is pretty light, and from what I’ve heard it can be circumvented fairly easily.
Of course it will never be as easy as having a single storefront for all your content, but as we can see from the streaming market, that’s not something that will happen either way.
I still don't understand how these arguments make sense for new code. Naturally, sizes should be unsigned because they represent values which cannot be negative. If you do pointer/size arithmetic, the only solution to avoid overflows is to overflow-check and range-check before computation.
You cannot even check the sign of a signed size after the fact to detect an overflow, because signed overflow is undefined behaviour!
The remaining argument, from what I can tell, is that comparisons between signed and unsigned sizes are bug-prone. There is, however, a dedicated warning to resolve this instantly.
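To illustrate (a minimal sketch; assuming GCC or Clang, where -Wextra enables -Wsign-compare):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "";
    int i = 0;

    /* strlen() returns size_t (unsigned); strlen("") - 1 wraps around to
     * SIZE_MAX.  In the comparison below, i is converted to unsigned, so
     * the condition is unexpectedly true.  Compiling with -Wextra (which
     * enables -Wsign-compare) flags the mixed comparison immediately. */
    if (i < strlen(s) - 1)
        puts("true: i was converted to unsigned for the comparison");
    return 0;
}
```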
It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.
Given this, I can't understand the justification. I'm currently using unsigned sizes. If you have anything to the contrary, please comment :^)
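For what it's worth, here's a minimal sketch of what I mean by checking before computation with unsigned sizes (the helper name is made up for illustration):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: add two sizes, refusing to wrap around.  With
 * unsigned size_t this pre-check is well defined; the equivalent post-hoc
 * check on a signed type would rely on undefined overflow behaviour. */
static bool size_add_ok(size_t a, size_t b, size_t *out)
{
    if (a > SIZE_MAX - b)   /* would wrap: reject before computing */
        return false;
    *out = a + b;
    return true;
}
```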
C offers a different solution to the problem in Annex K of the standard. It provides a type `rsize_t`, which like `size_t` is unsigned, and has the same bit width, but where `RSIZE_MAX` is recommended to be `SIZE_MAX >> 1` or smaller. You perform bounds checking as `<= RSIZE_MAX` to ensure that a value used for indexing is not in the range that would be considered negative if converted to a signed integer. A negative value provided where `rsize_t` is expected would fail the check `<= RSIZE_MAX`.
IMO, this is a better approach than using signed types for indexing, but AFAIK, it's not included in GCC/glibc or gnulib. It's an optional extension and you're supposed to define `__STDC_WANT_LIB_EXT1__` to use it.
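A rough sketch of how the check looks (the fallback definitions are my own assumption for implementations without Annex K, not something glibc provides):

```c
#include <stddef.h>
#include <stdint.h>

/* Fallback if the implementation doesn't provide Annex K's rsize_t/RSIZE_MAX. */
#ifndef RSIZE_MAX
typedef size_t rsize_t;
#define RSIZE_MAX (SIZE_MAX >> 1)
#endif

/* Reject any "size" that would look negative if reinterpreted as signed;
 * such values are almost certainly the result of wraparound or of a
 * negative number converted to unsigned. */
static int index_ok(rsize_t n)
{
    return n <= RSIZE_MAX;
}
```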
I don't know if any compiler actually supports it. It came from Microsoft and was submitted for standardization, but ISO made some changes from Microsoft's own implementation.
This is an interesting middle ground. As ncruces pointed out in a sibling comment, an object's size cannot have its top bit set without contradicting the ptrdiff_t type. That makes this seem like a reasonable approach to storing sizes.
> It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.
You can, since the number of bits is the same. The mapping of pointer bits to signed integer bits will mean that you can't then do arithmetic on the resulting integers and get meaningful results, but the behavior of such shenanigans is already unspecified with no guarantees other than you can get an integer out of a pointer and then convert it back later.
But also, semantically, what does it even mean to convert a single pointer to a size? The size of an object is naturally defined as the count of chars between two pointers, one pointing at the beginning of the object, the other at its end. Which is to say, a size is a subset of pointer difference that just happens to always be non-negative. So long as the implementation guarantees that, for every object, that non-negative difference fits in a signed integer of the appropriate size, it seems reasonable to reflect this in the types.
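A minimal sketch of that view, with the size expressed as a pointer difference:

```c
#include <stddef.h>

/* A size understood as a pointer difference: the count of chars between
 * the start and one-past-the-end of an object.  The subtraction already
 * has type ptrdiff_t, and is never negative as long as end >= begin. */
static ptrdiff_t object_size(const char *begin, const char *end)
{
    return end - begin;
}
```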
I've of course read his argument before, and I think it might be more applicable to C++. I exclusively program in C, and in that regard the relevant aspects, as far as I can tell, aren't clearly in favour of a signed type. I also think his discussion of iterator signedness mixes in issues caused by improper bounds checking and attributes them to the signedness of the size type. What remains, I can't see justifying a signed type other than "just because". I'm not sure it's applicable to C.
In the implementation of something like a deque or a merge sort, you could have a variable that represents offsets from pointers but which could sensibly be negative. C developers culturally aren't as particular about the theoretical correctness of types as developers in some other languages - there's a lot of implicit casting being used - so you'll typically see an `int` used for this. If you do wish to bring some rigour to your type system, you may argue that this value is distinct from a general integer that could be used for any arithmetic, and it's definitely not just a pointer either. So it should be a signed pointer difference.
Arrays aren't the best example, since they are inherently about linear, scalar offsets, but you might see a negative offset from the start of a (decayed) array in the implementation of an allocator with clobber canaries before and after the data.
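A rough sketch of the canary case (purely illustrative layout, not any real allocator):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CANARY 0xDEADBEEFu

/* Check a canary word stored just before the user data.  The offset from
 * the user pointer back to the canary is naturally a negative signed
 * pointer difference rather than a size_t. */
static int canary_intact(const unsigned char *user_data)
{
    const ptrdiff_t canary_offset = -(ptrdiff_t)sizeof(uint32_t);
    uint32_t stored;

    memcpy(&stored, user_data + canary_offset, sizeof stored);
    return stored == CANARY;
}
```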
I don't think you can make such a broad statement and be correct in all cases. Negative pointer arithmetic is not by itself a reason to use signed types, except if you are:
1. Certain the value you're adding is negative.
2. Checking for underflows after computation, which you shouldn't.
> It makes sense that you should be able to assign a pointer to a size. If the size is signed, this cannot be done due to its smaller capacity.
Why?
By the definition of ptrdiff_t, ISTM the size of any object allocated by malloc cannot be out of bounds of ptrdiff_t, so I'm not sure how you can have a useful size_t that uses the sign bit.
"Naturally, sizes should be unsigned because they represent values which cannot be unsigned."
Unsigned types in C have modular arithmetic; I think they should be used only when that is what you need, or maybe when you absolutely need the full range.
With signed types you can also get a run-time trap on overflow, making them safer to use. Bugs caused by unsigned wraparound are extremely hard to find.
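For example (assuming GCC or Clang, where -fsanitize=signed-integer-overflow or -ftrapv turns the signed overflow into a run-time error, while the unsigned version just wraps silently):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int u = UINT_MAX;
    int s = INT_MAX;

    u = u + 1;   /* well-defined wraparound to 0: silent and easy to miss */
    printf("%u\n", u);

    /* Signed overflow is undefined, which is exactly what lets a sanitizer
     * catch it: built with -fsanitize=signed-integer-overflow (or -ftrapv),
     * the next line aborts at run time instead of wrapping quietly. */
    s = s + 1;
    printf("%d\n", s);
    return 0;
}
```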
Pointer arithmetic that could overflow would probably involve a heap and therefore be less likely to require a relative, negative offset. Just use the addresses and errors you get from allocation.
Yes, but there are definitely cases where this doesn't apply, for example when deriving an offset from a user pointer. As such this is not a universal solution.
Huawei doesn't need to match Nvidia, because their clients don't have access to Nvidia. They can claim they match Nvidia, and the claim may be an exaggeration, but the market won't punish them, since there are no alternatives. AMD and Intel don't have that luxury. If they deliver 80% of what Nvidia delivers (at the same price point), they will sell, roughly speaking, zero.
AMD/Intel have to compete with Nvidia globally (or in whatever share of the market their trusted partners make up). They have to match Nvidia on price/performance.
Huawei doesn't, and won't for much longer, have to compete with Nvidia, so they can compete on raw performance, i.e. using more expensive (less efficient) processes and more power-hungry chips. The TCO could be higher, but they also don't have to take a 40-50% margin like Nvidia, and the PRC is less constrained by power costs. So it's possible to make a chip that costs 40% to fabricate instead of 20%, but only take a 20% margin instead of 40%. It's possible the TCO of that chip means 2x the power... but local power costs 50% as much.
Funny how, as a Chinese person, I told my Canadian friend that I'd bet on one of the Chinese companies catching up to Nvidia (at least partly) before AMD or Intel does.
Looking at the long term, STEM supply drives these rapidly advancing markets. We have China training so many engineers of all stripes that there is a chronic oversupply and cut-throat competition between engineers, with the ones who don't make it working in noodle shops. We also have the USA, which has 1/3 the population, is seeing record-low attendance at its colleges, and is currently attacking its own universities. Short of another Mao-style political upheaval, it's difficult to see how the USA's crown in technological leadership won't be handed over to China in time, if it hasn't been already.
That said, smaller countries do out-compete the bigger ones in their specialised areas of expertise despite being many times smaller. Finland builds great ice-breakers, Australia does mining and medical, New Zealand does food products. The same will hold for the USA. They will do perfectly fine once the crown passes, just like the other smaller fish do now. Well, assuming the current kerfuffle doesn't turn into a Mao-sized disaster. Right now it does have the same anti-intellectual stench to it, but you will stomp on it before it gets completely out of hand. Surely?
That's a gift from Trump. He locked Nvidia out and opened the rest of the world to competitors. This is like shooting yourself in the foot. No donations needed, but I expect they will donate in 2028.
I don't know how well this makes you understand your dependencies. For C/C++, a lot of people probably depend on the stb single-header libraries. There's stb_truetype, but it specifically says not to use it on any untrusted/outside .ttf files, which I do like, but you have to remember to bake to bitmaps or only use .ttf files you provide yourself, so I would put this dependency in a different category, like tooling. Is there a way to do this in other languages like JS and NPM? Maybe carefully choosing which dependencies you include is better?
I wonder whether it'll be possible to compress enough of the game to make (almost) every possible scenario that you could encounter in the game playable. The same issue that the previous AI experiments for Minecraft and others had is that objects and enemies seem to pop in and out of nowhere. Could the "learned" probability be high enough for this never to be an issue? You know how you sometimes think you're seeing something in real life but it's just an optical illusion? It kinda feels like that to me. Obviously this still requires an entire game to be made before you can train on it, but could maybe open up other development and testing of games.
> Obviously this still requires an entire game to be made before you can train on it, but could maybe open up other development and testing of games.
The idea of developing a game where the "code" is totally opaque and non-deterministic honestly sounds like an absolute nightmare. How would you even begin to QA something like that?
> The idea of developing a game where the "code" is totally opaque and non-deterministic honestly sounds like an absolute nightmare. How would you even begin to QA something like that?
I have a fear that we are going to experience a significant regression in our ability to develop software as new "programmers" normalize the idea of "generating" "code" this way. Some kind of dystopian future where people think an "is-negative" module is a good idea, coupled with that module having been "generated" by "AI". Bone chilling.
Re: QA
Clearly we just need another generative "AI" to act as QA in an adversarial capacity to the "AI" generating the "code". Turtles all the way down.
This proposed direction is even worse than generating code, it's eliminating code altogether. The project "source" would just be a big blob of weights that you indirectly prod and poke until it hopefully does what you want, and nobody could understand exactly what's going on under the hood even if they wanted to.
Computer "programs" being big hairy balls of "intent" derived from some corpus of inputs and a prompt is horrifying.
I kind of wish hardware hadn't gotten fast enough to enable this future. Humans being lazy, as they are, and the output of this kind of horror show eventually being "good enough", this is going to get normalized.
Anybody working to enable this future is, to me, acting unethically. This is the "AI" apocalypse I'm worried about, not the AGI singularity fever dreams that garner headlines.
Worse yet: that big blob of weights only works at all because it's been trained on a huge corpus of data from existing games. Doing anything actually novel in a game - like implementing new game mechanics, or even using a distinctive art style - would be next to impossible.
ssh teletekst.nl