
Interesting, I actually find LLMs very useful at debugging. They are good at mindless grunt work, and a great deal of debugging in my case is going through APIs and figuring out which of the many layers of abstraction ended up passing a wrong argument into a method call because of some misinterpretation of the documentation.

Claude Code can do this in the background tirelessly while I can personally focus more on tasks that aren't so "grindy".


They are good at purely mechanical debugging: throw them an error and they can figure out which line threw it, and therefore take a reasonable stab at how to fix it. If the bug is actually in the code, sure, you'll get an answer. But they are terrible at weird runtime behaviors caused by unexpected data.

Yes, if you exclude roughly 40% of the U.S. population (the share of Americans who are obese) [1], then the U.S. has a life expectancy on par with the rest of the developed world.

[1] https://www.cdc.gov/nchs/fastats/obesity-overweight.htm


I wonder what role access to health care plays. Canada and Australia (well, ALL other developed nations) have universal healthcare. I know that in the States people over 65 do, but not getting proper health care before then surely puts people at risk of dying earlier. Also, what is the Venn diagram of people who are obese and don't have affordable access to healthcare? Double whammy.

The American demographic with the highest uninsured rate, Hispanics, has a life expectancy comparable to the UK average. So access to healthcare probably plays some role, but it seems like there are lots of other confounding factors.

It also doesn't seem to vary that much with state-level healthcare differences. Life expectancy varies quite a bit by geography, but states that didn't do the Medicaid expansion, like Wisconsin and Wyoming, seem to have rates similar to neighboring states that did.


>White people in the US have comparable rates to Europe.

No, they don't. The non-Hispanic white population in the U.S. has a life expectancy of 77.5, which is lower than the overall U.S. average and comparable to Eastern Europe, but not to Europe as a whole (life expectancy of 81.4).


I interpreted OP's post to say that you take a C file after the preprocessor has translated it. You can perform that preprocessing simply by passing the file to an existing C preprocessor, or you can implement one yourself.

Implementing a C preprocessor is tedious work, but it's nothing remotely complex in terms of challenging data structures, algorithms, or requiring sophisticated architecture. It's basically just ensuring your preprocessor implements all of the rules, each of which is pretty simple.
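
As a rough illustration of how simple each individual rule is, here is a toy sketch (assumptions: the function name is made up, and it only handles object-like macros over already whitespace-separated tokens; a real preprocessor also needs a proper tokenizer, function-like macros, #if evaluation, #include handling, stringification, token pasting, and the rescanning rules):

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <unordered_map>

    // Toy rule: substitute object-like macros in a line of
    // whitespace-separated tokens.
    std::string expand_object_macros(
        const std::string& line,
        const std::unordered_map<std::string, std::string>& defines)
    {
        std::istringstream in(line);
        std::string token, out;
        while (in >> token) {
            auto it = defines.find(token);
            if (!out.empty()) out += ' ';
            out += (it != defines.end()) ? it->second : token;
        }
        return out;
    }

    int main() {
        std::unordered_map<std::string, std::string> defines{{"BUFSIZE", "512"}};
        // prints: char buf [ 512 ] ;
        std::cout << expand_object_macros("char buf [ BUFSIZE ] ;", defines) << '\n';
    }

Of course, piping the file through an existing preprocessor (e.g. cc -E) gets you the real thing for free.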


And you had the same misunderstanding as OP. Because you have eliminated all macros during the preprocessing step, you can no longer have macro-based APIs: no function-like macros you expect library users to call, no #ifdef blocks where you expect user code to either #define or #undef something, and no primitive form of maintaining API (but not ABI) compatibility for many things.

It’s a cute learning project for a student of computer science for sure. It’s not remotely a useful software engineering tool.
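
To make the loss concrete, here is the kind of header content that simply cannot survive a preprocess-then-extract step (the names MYLIB_CHECK and MYLIB_NO_LOGGING are hypothetical, purely for illustration):

    // hypothetical mylib.h
    #pragma once
    #include <cstdio>

    // A function-like macro that library users are expected to call.
    // After preprocessing, this definition no longer exists, so there is
    // nothing left to "export".
    #define MYLIB_CHECK(cond) \
        do { if (!(cond)) std::fprintf(stderr, "check failed: %s\n", #cond); } while (0)

    // A knob the *user* is expected to #define before including the header.
    #ifndef MYLIB_NO_LOGGING
    #  define MYLIB_LOG(msg) std::fprintf(stderr, "mylib: %s\n", msg)
    #else
    #  define MYLIB_LOG(msg) ((void)0)
    #endif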


Our points of view are probably not too far off, really. Remember this whole thought-experiment is counterfactual: we're imagining what "automatic extraction of function declarations from a .c file" would have looked like in the K&R era, in response to claims (from 50 years later) that "No sane programming language should require a duplication in order to export something" and "The .h could have been a compiler output." So we're both having to imagine the motivations and design requirements of a hypothetical programmer from the 1970s or 1980s.

I agree that the tool I sketched wouldn't let your .h file contain macros, nor C99 inline functions, nor is it clear how it would distinguish between structs whose definition must be "exported" (like sockaddr_t) and structs where a declaration suffices (like FILE). But:

- Does our hypothetical programmer care about those limitations? Maybe he doesn't write libraries that depend on exporting macros. He (counterfactually) wants this tool; maybe that indicates that his preferences and priorities are different from his contemporaries'.

- C++20 Modules also do not let you export macros. The "toy" tool we can build with 1970s technology happens to be the same in this respect as the C++20 tool we're emulating! A modern programmer might indeed say "That's not a useful software engineering tool, because macros" — but I presume they'd say the exact same thing about C++20 Modules. (And I wouldn't even disagree! I'm just saying that that particular objection does not distinguish this hypothetical 1970s .h-file-generator from the modern C++20 Modules facility.)

[EDIT: Or to put it better, maybe: Someone-not-you might say, "I love Modules! Why couldn't we have had it in the 1970s, by auto-generating .h files?" And my answer is, we could have. (Yes it couldn't have handled macros, but then neither can C++20 Modules.) So why didn't we get it in the 1970s? Not because it would have been physically difficult at all, but rather — I speculate — because for cultural reasons it wasn't wanted.]
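
For concreteness, here's roughly what I imagine the hypothetical generator's input and output looking like (names invented; the generated header is shown in comments since the tool itself doesn't exist):

    /* point.c -- input to the hypothetical .h-file generator */
    struct point { int x, y; };

    int manhattan(struct point a, struct point b) {
        int dx = a.x - b.x, dy = a.y - b.y;
        return (dx < 0 ? -dx : dx) + (dy < 0 ? -dy : dy);
    }

    /* point.h -- what the generator could emit mechanically: the struct
     * definition plus a prototype derived from the function signature.
     *
     *   struct point { int x, y; };
     *   int manhattan(struct point a, struct point b);
     *
     * Macros have no representation here, which is exactly the limitation
     * shared with C++20 Modules. */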


Modules are the future... and will always be the future.

I think everyone hopes/hoped for a sane and useful version of modules, one that would provide substantial improvements to compilation speed and make things like packaging libraries and dealing with dependencies a lot more sane.

The version of modules that got standardized is anything but that. It's an incredibly convoluted mess that requires an enormous amount of effort for little benefit.


> It's an incredibly convoluted mess that requires an enormous amount of effort for little benefit.

I'd say C++ as a whole is a complete mess. While it's powerful (including OOP), it's a complicated and inconsistent language with a lot of historical baggage (40+ years). That's why people and companies still search for (or even already use) viable replacements for C++, such as Rust, Zig, etc.


This is untrue. The MS Office team is using a non-standard MSVC compiler flag that turns standard #include directives into header units, which treats those headers in a way similar to precompiled headers. This requires no changes to source code, except for some corner cases they mention in that very blog post to work around some compiler quirks.

That is not the same as using modules, which they have not done.


There's nothing non-standard happening there. The compiler is allowed to translate #include -> import. Here's the standardese expressing that: https://eel.is/c%2B%2Bdraft/cpp.include#10.

I do agree, it's not _exactly_ the same as using _named modules_, but header units share an almost identical piece of machinery in the compiler with named modules. This makes the planned future transition to named modules a lot easier, since we know the underlying machinery works.

The actual blocker for named modules is not MSVC, it's other compilers catching up, which clang and gcc are doing quite quickly!
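
For anyone who hasn't looked at this, the source-level difference between the three forms is small. A side-by-side sketch, not a single buildable file; the module name mylib is made up, and the build flags differ per compiler:

    // 1. Classic textual inclusion: the header is re-preprocessed in every TU.
    #include <vector>

    // 2. C++20 header unit: the same header consumed as a prebuilt unit. A
    //    conforming compiler may translate the #include above into this form
    //    ([cpp.include]/10). Macros defined in the header are still visible
    //    to the importer.
    import <vector>;

    // 3. C++20 named module: no macros flow in or out, and the interface has
    //    to be written as a module interface unit.
    import mylib;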


If you read the article, it's because Tether also issues a gold token called XAUT whose value is pegged to the price of gold:

https://coinmarketcap.com/currencies/tether-gold/


Their whole market cap only equates to ~14 tons at current prices though… and the article says they bought 26 tons in the quarter before this one, in which they bought 27 tons…

Step 1, people hand Tether real money at 0% interest.

Step 2, Tether invests in interest-bearing assets.

Step 3, Repeat step 2 with profits from step 2.

The choice of gold is just diversification. Kind of odd timing though.


There's no guarantee char8_t is 8 bits either, it's only guaranteed to be at least 8 bits.


> There's no guarantee char8_t is 8 bits either, it's only guaranteed to be at least 8 bits.

Have you read the standard? It says: "The result of sizeof applied to any of the narrow character types is 1." Here, the "narrow character types" include char and char8_t. So technically they aren't guaranteed to be 8 bits, but they are guaranteed to be one byte.
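
Which you can check directly (C++20, compiles as-is):

    // sizeof of a narrow character type is 1 by definition, even on a
    // platform where a byte happens to be wider than 8 bits.
    static_assert(sizeof(char) == 1 && sizeof(char8_t) == 1);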


Yes, but the byte is not guaranteed to be 8 bits, because on many ancient computers it wasn't.

The poster to whom you replied has read the standard correctly.


What platforms have char8_t as more than 8 bits?


Well, platforms with CHAR_BIT != 8. In C and C++, char (and therefore a byte) is at least 8 bits, not exactly 8 bits. POSIX does force CHAR_BIT == 8. I think the only place you still see anything else is embedded, and even there only on some DSP- or ASIC-like devices. So in practice most code will break on those platforms, and they are very rare, but they are still technically supported by the C and C++ standards. Similarly, C still supported non-two's-complement architectures until C23.
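
If your code genuinely assumes 8-bit bytes (which is almost all code), the usual defensive move is to fail loudly at compile time rather than misbehave on one of those exotic targets:

    #include <climits>

    // Refuse to compile on the rare CHAR_BIT != 8 platforms (some DSPs).
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");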


In C++ you do it the other way around: you have a single class that is polymorphic over the types it wraps. The name of this technique within C++ is type erasure (that term means something else outside of C++).

Examples of type erasure in C++ are classes like std::function and std::any. Normally you need to implement the type erasure manually, but there are some libraries that can automate it to a degree, such as [1], though it's fairly clumsy.

[1] https://www.boost.org/doc/libs/latest/doc/html/boost_typeera...
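
A minimal hand-rolled sketch of the technique, in the same spirit as std::function/std::any (all names here are made up for illustration; a production version would also constrain the constructor and support copying):

    #include <iostream>
    #include <memory>
    #include <utility>

    // One non-template type that can hold any object with a print() member,
    // hiding the concrete type behind an internal vtable.
    class Printable {
        struct Concept {                    // the erased interface
            virtual ~Concept() = default;
            virtual void print() const = 0;
        };
        template <class T>
        struct Model final : Concept {      // wraps a concrete T
            T value;
            explicit Model(T v) : value(std::move(v)) {}
            void print() const override { value.print(); }
        };
        std::unique_ptr<Concept> self_;
    public:
        template <class T>
        Printable(T value) : self_(std::make_unique<Model<T>>(std::move(value))) {}
        void print() const { self_->print(); }
    };

    struct Circle { void print() const { std::cout << "circle\n"; } };
    struct Square { void print() const { std::cout << "square\n"; } };

    int main() {
        Printable shapes[] = {Circle{}, Square{}};  // one runtime type, many Ts
        for (const auto& s : shapes) s.print();
    }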


It's neat this is a thing I guess, but I agree it looks fairly clumsy compared to the Rust answer.

