Isn't it the case that all devices that support OpenWrt can't run at 1 Gbps speeds (because the chip drivers aren't available)? For this reason I just replaced my Archer C7, which capped at ~300 Mbps. Now I run a MikroTik router. Very flexible, but a steep learning curve.
* Libraries: there is no npm-like ecosystem where you can get anything you need.
* Stack Overflow: if you're looking for answers, there might not be anyone who has encountered your problem before. You might have to dig really deep to find something.
* Sometimes you might run into a compiler bug, usually related to the performance of the generated code: it generates correct code, but it's slow for no apparent reason, and minor changes to the code make it fast again.
* Relying on OpenSSL, especially v3, and especially on Windows, is a big problem, but that's more on OpenSSL, I think. I actually wrote a library that works around this by using the platform's HTTP/SSL stack instead: https://github.com/treeform/puppy
* No HTTP gzip support in the standard library. You can always work around it with zippy, though: https://github.com/guzba/zippy
* async stack traces are really hard to read.
* Not enough docs on the different ways to do threading. There is no one solution: sometimes you want a quick thing, sometimes you're doing CPU-bound tasks, and other times you're doing network tasks (where async is better). But many big languages struggle here; there's no one-size-fits-all threading solution.
It's definitely not style case insensitivity which everyone loves to bike-shed about.
For the ecosystem, https://nimble.directory/ is listing quite a few packages. It's still nowhere near the size of the JavaScript ecosystem, but it's a good start.
Note that async stack traces have gotten better recently! They mostly report like other stack traces now. I actually forgot I was using async for a bit. There are still some bugs, I think.
A lot of the cons are a matter of personal taste, such as the whitespace-based blocks (like Python), or the unusual identifier rules (only the case of the first letter is significant, and identifiers are underscore-insensitive).
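To make that rule concrete, here's a rough sketch of Nim's identifier comparison in Python (assuming non-empty identifiers; the authoritative rule is in the Nim manual's "Identifier equality" section):

```python
# Sketch of Nim's partial case insensitivity: two identifiers are
# considered equal if their first characters are identical and the rest
# compare equal ignoring case and underscores.
def nim_ident_eq(a: str, b: str) -> bool:
    def normalize(s: str) -> str:
        # Keep the first character as-is; lowercase the rest and
        # strip underscores.
        return s[0] + s[1:].replace("_", "").lower()
    return normalize(a) == normalize(b)

print(nim_ident_eq("parseInt", "parse_int"))  # True: same first char
print(nim_ident_eq("fooBar", "FooBar"))       # False: first char differs
```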
The biggest objective cons are the small community, and, possibly for some people, the use of a GC. The GC can be turned off, but I don't know how practical that really is, since it's not something I need; I've just seen people complaining.
The super-fast libraries mentioned in the article (Pixie, Zippy, SuperSnappy) all use the GC. They can beat or tie with the best C libs. But in order to be fast, they avoid the GC in the critical hot paths: they pre-allocate work buffers of the right size, or allocate stack objects instead. Surprisingly, going outside the bounds of the GC does not feel that foreign or weird in Nim. It's not hard.
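The pre-allocation trick isn't Nim-specific. Here's a minimal Python sketch of the idea (class and method names are illustrative, not from Pixie or Zippy): one buffer allocated up front, and a hot path that only writes into it.

```python
# Illustrative only: reuse one pre-sized work buffer across calls, so
# the hot path does no per-call allocation and gives the GC nothing
# to collect.
class Scaler:
    def __init__(self, size: int) -> None:
        self.buf = bytearray(size)  # allocated once, up front

    def double_all(self, data: bytes) -> bytes:
        # Hot path: write into the existing buffer; the only new
        # allocation is the final bytes copy returned to the caller.
        buf = self.buf
        for i, b in enumerate(data):
            buf[i] = (b * 2) & 0xFF
        return bytes(buf[: len(data)])

s = Scaler(1024)
print(s.double_all(b"\x01\x02\x03"))  # b'\x02\x04\x06'
```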
Having many files with the same name, differing only in case (a cultural 'thing'), is just a mess; there's no extra value here. Do you use case in the spoken word?
Arguments about performance or ease of implementation are technical details nobody cares about (except devs, of course :) ).
We just end up living with this decision, and there's no easy way out.
Can't you trivially solve this, without addressing any of the 17 problems with case insensitivity, by just not creating afile.txt and aFile.txt? In TB of data, including files from the last 17 years, I don't think I have any folders like that.
3Delight also traces SDS and NURBS analytically. In offline renderers, geometry data constitutes most of the memory usage. Texture RAM usage is kept under a few GB by using a page-based cache.
Multiple gigabytes of geometry is a lot. That works out to potentially dozens to hundreds of polygons per pixel. Even so, the person I was replying to seemed to wonder why everything gets converted to polygons, which comes down to a more holistic pragmatism.
Agree in general about geometry memory, though: in high-end VFX, displacement is pretty much always used, so you have to dice down to micropolygons for hero assets (unless they're far away or outside the frustum).