They are not free; they are paid for by taxes. And in pretty much all countries, irrespective of funding model, these services have increased in price much faster than general inflation. This is the Baumol effect in action.
I imagine personal trainers and childcare workers would see a drop in demand and perhaps also an increase in supply if a bunch of people suddenly lost their jobs to AI.
Nice! The visualization makes it easy to compare when cities are in the same hemisphere. It would be cool if there were an option to align the seasons when comparing cities in different hemispheres.
Nice observation ;-) If I'm reading the underlying data[0] correctly, it looks like the threshold for DEHT is significantly lower in the Vanilla tests (<4,500 ng) vs the Strawberry tests (<22,500 ng).
1) Battery size/life compromised by form factor (or have cumbersome wire to battery pack in pocket)
2) WiFi/5G connectivity - the form factor seems to compromise antenna design, and anyway the health impact of an antenna on your head all day is unknown
3) Fashion - most people care more about appearance than any debatable benefit smart glasses might have (AR?) over a smart phone
4) Smart glasses AR displays are a cool piece of tech, but quality-wise nowhere near a phone screen for photos and videos - TikTok on smart glasses?
5) Texting seems a very popular type of communication for all age groups, and much preferred (and more discreet) than having to voice dictate into smart glasses, or listening to incoming messages via smart glasses speakers
It just seems that a smartphone provides so much, and checks so many boxes in terms of features and usability, that most people won't want to be without one; and once you have one, the incremental benefit of another device becomes minimal.
My experience so far has been: if I know what I want well enough to explain it to an LLM then it’s been easier for me to just write the code. Iterating on prompts, reading and understanding the LLM’s code, validating that it works and fixing bugs is still time consuming.
It has been interesting as a rubber duck, for exploring a new topic or language, and for some code golf, but so far not for production code for me.
Okay, but as soon as you need to do the same thing in [programming language you don't know], then it's not easier for you to write the code anymore, even though you understand the problem domain just as well.
Now, understand that most people don't have the same grasp of [your programming language] that you have, so it's probably not easier for them to write it.
It can offer an advantage over the built-in caching, but it depends on your exact access patterns. For example, if you are running ClickHouse on multiple servers and accessing the same reference data, it's more efficient to cache that data in a centralized location (like Regatta) instead of on the disk of each individual instance.
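To make the trade-off concrete, here's a minimal sketch (the `Origin`, `Node`, and key names are made up for illustration, not Regatta's or ClickHouse's actual API): with N nodes each keeping its own cache, the same reference data is fetched from the origin and stored N times; with a shared cache layer, one miss fills it for everyone.

```python
class Origin:
    """Stand-in for the backing store; counts how often it is read."""
    def __init__(self):
        self.reads = 0

    def fetch(self, key):
        self.reads += 1
        return f"data:{key}"


class Node:
    """A server that reads through either its own cache or a shared one."""
    def __init__(self, origin, shared=None):
        self.origin = origin
        self.cache = shared if shared is not None else {}

    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.origin.fetch(key)
        return self.cache[key]


# Per-node caches: each of 3 nodes misses independently.
o1 = Origin()
for node in [Node(o1) for _ in range(3)]:
    node.get("ref_table")
print(o1.reads)  # 3 origin reads, 3 redundant cached copies

# Shared cache: the first node's miss serves the other two.
o2 = Origin()
shared = {}
for node in [Node(o2, shared) for _ in range(3)]:
    node.get("ref_table")
print(o2.reads)  # 1 origin read, 1 cached copy
```

Of course, the shared cache adds a network hop on every hit, so whether it wins depends on how hot the data is and how many nodes share it.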
Philosophically, our goal is to build a standard that can be used in these kinds of applications moving forward, so that application developers don't need to build streaming over and over again and users don't need to learn how to configure each individual system's caching.
> and I'd love to hunt this down and fix it but have no clue where to start. I guess the first step would be to consistently reproduce the bug...
I am not familiar with Chromium at all, and I also don't run Linux on the desktop (which, from your video, I'm guessing you do?), so take this with a grain of salt...
I would start by looking at the focus and key event handlers, e.g. log the contents of pressed_keys and/or step through the code from the beginning of the focus handler. It looks like this might be the place:
Even if you can't repro it, you may be able to figure out the issue just by reading through that code with some theories in mind. E.g., since pressing another key seems to fix it, maybe look at what the code is doing there... my guess is the release event fixes whatever corrupted state it is in upon focus.
Even in places where these services are expensive, it does not seem to be because the workers are highly paid.