We just aged out of this as our youngest child is now 11, but I can affirm that magnatiles are fantastic and fun - and that is coming from someone who lionizes legos and considers them the ne plus ultra of toys for children.
That being said ...
We got a lot of mileage - many good years of use, from both our boys and girls - out of "Snap Circuits":

https://elenco.com/
A very, very cool building ecosystem with easy to build and understand recipes - we built a working FM radio, for instance. Not at all fussy or fragile.
My children are not particularly "STEMy" but they all enjoyed breaking out the "circuit kit".
If you actually work, the amount of work you do is absurdly more than what most others do. And much of the time, both high- and low-productivity people assume everyone does roughly as much as they do, in both directions.
A lot of people are oblivious to Zipf distributions in effort and output, and if you ever catch on to it as a productive person, it really reframes ideas about fairness and policy and good or bad management.
It also means that you can recognize a good team: when a bunch of high performers are pushing and supporting each other and being held to account out in the open, amazing things happen that make other workplaces look ridiculous.
My hope for AI is that instead of 20% of the humans doing 80% of the work, you end up with force multipliers and a ramping up, so that more workplaces look like high-functioning teams, making everything more fair, engaging, and productive. But I suspect that once people get better with AI, at least up to the point of AGI, we're going to see the same distribution, just at 10x or 50x the productivity.
The "natural inverse" relationship between "address-of" and indirect addressing is only partial.
You can apply the "*" operator as many times as you want, but applying "address-of" twice is meaningless.
Moreover, in complex expressions it is common to mix the indirection operator with array indexing and with structure member selection, and all three of these addressing operators can appear an unlimited number of times in an expression.
Writing such addressing expressions in C is extremely cumbersome, because they require many levels of parentheses, and even then it is difficult to see the order in which the operators are applied.
With a postfix indirection operator no parentheses are needed and all addressing operators are executed in the order in which they are written.
So it is beyond reasonable doubt that a prefix "*" is a mistake.
The only reason "*" was chosen as a prefix operator in C (a choice later regretted) was that it seemed easier to define the expressions "*++p" and "*p++" with the desired order of evaluation.
There is no other use case where a prefix "*" simplifies anything. For the postfix and prefix increment and decrement it would have been possible to find other ways to avoid parentheses, and even if they had required parentheses, that would still have been simpler than mixing "*" with array indexing and structure member selection.

Moreover, the use of "++" and "--" with pointers was only a workaround for a dumb compiler, which could not determine by itself whether it should access an array using indices or pointers. Normally there should be no need to expose such an implementation detail in a high-level language; the compiler, not the programmer, should choose the addressing modes that are optimal for the target CPU.

On some CPUs, including the Intel/AMD CPUs, accessing arrays by incrementing pointers, as in old C programs, is usually worse than accessing them through indices, because on such CPUs the loop counter can be reused as an index register, regardless of the order in which the array is accessed (including for accessing multiple arrays), avoiding the use of extra registers and reducing the number of executed instructions.
With a postfix "*", the operator "->" would have been superfluous. It was added to C only to avoid some of the most frequent cases where a prefix "*" leads to ugly syntax.
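To make the parenthesization burden concrete, here is a small C sketch; the struct and variable names are hypothetical, chosen only to illustrate the point:

#include <stddef.h>

/* Hypothetical types, just to show how the operators nest. */
struct inner { int *vals; };
struct outer { struct inner *items; };

int get(struct outer **pp, size_t i, size_t j) {
    /* Spelled out with prefix "*" only: every dereference that feeds a
       postfix operator ("[]" or ".") needs its own level of parentheses. */
    int a = *((*((*(*pp)).items + i)).vals + j);

    /* "->" and "[]" exist largely to hide those parentheses: */
    int b = (*pp)->items[i].vals[j];

    /* With a postfix dereference (as in Pascal's "^" or Zig's ".*"), the
       whole chain would simply read left to right, with no parentheses. */
    return a + b;
}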
Classical and neoclassical economics tells us that people always spend their entire paycheck on consumer goods (consumption) or claims to future consumer goods (savings). There is no money left over at the end of the month. Producers are notified in advance what to produce. If a worker happens to have $5 left at the end of the month and he wants to buy a car, he will use those $5 towards a non-refundable deposit, even if the car costs tens of thousands of dollars, and contractually obligate himself to spend the remaining money. No money is ever carried over from one period to another, since money is neutral and just a veil.
Meanwhile the observation in the real world is that people often hold onto enough money to make the full purchase. From a theoretical perspective it means they have found a third option: delay making decisions, which is neither saving nor consumption. This means producers are not notified of what to produce until the very moment of the purchase, which means producers have to engage in speculative production ahead of time with no guarantee of a sale. They have to hold inventories and possibly throw away unsold inventories (because no seller wants them). This holding of inventories can manifest itself in the form of idle capital and unemployment as well.
What Silvio Gesell discovered in his farming venture is that this delayed decision making is incredibly wasteful, because most products (especially produce) are perishable/non-durable at long time scales. This means that waiting produces waste per unit of time on the part of the speculative producers, and it gives the seller of perishables a weaker negotiating position than the buyer, who is an implicit seller of money, a non-perishable good. Buyers, or generally people with money, are therefore structurally advantaged over those without it. This leads to the conclusion that either money should perish as well, or, in a more modern interpretation, that negative interest is a reflection of rising entropy.
I've seen neoclassicals reject this theory with pretty flimsy reasoning, mostly based around the idea that people don't hold cash or positive balances in their accounts.
Now this theory isn't perfect by any means, but it is pretty interesting, and the more you dig, the more it feels like it is pointing in the right direction. Meanwhile Keynes proposed a different theory, based on the liquidity of assets rather than their durability. Money is more liquid because everyone wants money: you can take money to any seller and buy their goods, whereas as a seller you must have the particular good that the buyer wants. This is a more general theory, since it isn't biased exclusively towards money; there is a sort of hierarchy of liquidity. Bank deposits are as liquid as, or perhaps even more liquid than, cash, which is why account balances can be used for payment instead of cash. Bonds are essentially money with a duration that binds their use, which makes their market value and their nominal value diverge. Stocks are on the extreme end of liquid assets, with wild fluctuations, while things like apples are at the illiquid end. There is a limit to how many apples a person will accept as "payment" for parting with money.
I've used OCaml a bit and found various issues with it:
* Terrible Windows support. With OCaml 5 it's upgraded to "pretty bad".
* The syntax is hard to parse for humans. Often it turns into a word soup, without any helpful punctuation to tell you what things are. It's like reading a book with no paragraphs, capitalisation or punctuation.
* The syntax isn't recoverable. Sometimes you can add a single character and the error message is essentially "syntax error in these 1000 lines".
* ocamlformat is pretty bad. It thinks it is writing prose. It will even put complex `match`es on one line if they fit. Really hurts readability.
* The documentation is super terse. Very few examples.
* OPAM. In theory... I feel like it should be great. But in practice I find it to be incomprehensible, full of surprising behaviours, and also surprisingly buggy. I still can't believe the bug where it can't find `curl` if you're in more than 32 Unix groups.
* Optional type annotations for function signatures throw away a significant benefit of static typing: documentation/understanding and nice error messages (see the sketch after this list).
* Tiny ecosystem. Rust gets flak for its small standard library, but OCaml doesn't even have a built-in function to copy files.
* Like all FP languages it has a weird obsession with singly linked lists, which are actually a pretty awful data structure.
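As a small illustration of the annotation point above (a generic sketch, not taken from any particular codebase):

(* With no annotations, the signature is whatever gets inferred, so a
   mistake in the body tends to surface as a confusing type error at some
   distant call site rather than at the definition. *)
let area w h = w *. h

(* Annotating the signature documents intent and localizes type errors. *)
let area (w : float) (h : float) : float = w *. h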
It's not all bad though, and I'd definitely take it over C and Python. Definitely wouldn't pick it over Rust though, unless I was really worried about compile times.
What could be good is a relational + array model. I have some ideas at https://tablam.org, and I think building not just the language but also the optimizer in tandem will turn out very nicely.
In languages like JavaScript, immutable and constant may be theoretically the same thing, but in practice "const" means a variable cannot be reassigned, while "immutable" means a value cannot be mutated in place.
They are very, very different semantically, because const is always local. Declaring something const has no effect on what happens with the value bound to a const variable anywhere else in the program. Whereas, immutability is a global property: An immutable array, for example, can be passed around and it will always be immutable.
JS has always had 'freeze' as a kind of runtime immutability, and tooling like TS can provide readonly types that give immutability guarantees at compile time.
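A minimal TypeScript sketch of the distinction (variable names are arbitrary):

// "const" is a local, binding-level guarantee: it only forbids reassignment.
const a = [1, 2, 3];
a.push(4);             // allowed: the array value is still mutable
// a = [];             // error: cannot reassign a const binding

// Immutability is a property of the value and travels with it.
const b: readonly number[] = Object.freeze([1, 2, 3]);
// b.push(4);          // compile error; at runtime, TypeError in strict mode

function touch(xs: number[]) {
  xs.push(99);         // mutates the caller's array
}
touch(a);              // fine: declaring "a" const did not protect the value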
"For reasons like this, "in-context learning" is not an accurate term for transformers. It's projection and storage, nothing is learnt.
This new paper has attracted a lot of interest, and it's nice that it proves things formally and empirically, but it looks like people are surprised by it, even though it was clear."
> Context switching is virtually free, comparable to a function call.
If you’re counting that low, then you need to count carefully.
A coroutine switch, however well implemented, inevitably breaks the branch predictor’s idea of your return stack, but the effect of mispredicted returns will be smeared over the target coroutine’s execution rather than concentrated at the point of the switch. (Similar issues exist with e.g. measuring the effect of blowing the cache on a CPU migration.) I’m actually not sure if Zig’s async design even uses hardware call/return pairs when a (monomorphized-as-)async function calls another one, or if every return just gets translated to an indirect jump. (This option affords what I think is a cleaner design for coroutines with compact frames, but it is much less friendly to the CPU.)
So a foolproof benchmark would require one to compare the total execution time of a (compute-bound) program that constantly switches between (say) two tasks to that of an equivalent program that not only does not switch but (given what little I know about Zig’s “colorless” async) does not run under an async executor(?) at all. Those tasks would also need to yield on a non-trivial call stack each time. Seems quite tricky all in all.
Ada, or at least GNAT, also supports compile-time dimensional analysis (unit checking). I may be biased, because I mostly work with engineering applications, but I still do not understand why in other languages it is delegated to 3rd party libraries.
It's much easier to write ultra low latency code in Zig because doing so requires using local bump-a-pointer allocators, and the whole Zig standard library is built around that (everything that allocates must take the allocator as a param). Rust on the other hand doesn't make custom allocators easy to use, partially because proving they're safe to the borrow checker is tricky.
This isn't what people are talking about; you aren't understanding the problem.
With RAII you need to leave everything in an initialized state unless you are being very, very careful - which is why MaybeUninit is always surrounded by unsafe.
{
Foo f;
}
f must be initialized here, it cannot be left uninitialized
std::vector<T> my_vector(10000);
EVERY element in my_vector must be initialized here, they cannot be left uninitialized, there is no workaround
Even if I just want a std::vector<uint8_t> to use as a buffer, I can't - I need to manually malloc with `(uint8_t*)malloc(sizeof(uint8_t)*10000)` and fill that.
So what if the API I'm providing needs a std::vector? Well, I guess I'm eating the cost of initializing 10000 objects, pulling them into cache and thrashing them out, just to do it all again when I memcpy into it.
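A small C++ sketch of that cost; `fill_from_source` is a hypothetical function standing in for whatever actually produces the bytes:

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

void fill_from_source(std::uint8_t* dst, std::size_t n);  // hypothetical producer

std::vector<std::uint8_t> as_vector(std::size_t n) {
    std::vector<std::uint8_t> buf(n);  // value-initializes: writes n zeros nobody reads
    fill_from_source(buf.data(), n);   // ...then overwrites them all anyway
    return buf;                        // unavoidable if the API insists on std::vector
}

std::unique_ptr<std::uint8_t[]> as_raw_buffer(std::size_t n) {
    // C++20: default-initializes, so the bytes start out uninitialized.
    auto buf = std::make_unique_for_overwrite<std::uint8_t[]>(n);
    fill_from_source(buf.get(), n);
    return buf;                        // but this is no longer a std::vector
}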
This is just one example of many
Another one: with RAII you need copy construction, operator=, move construction, and move operator=. If you have a generic T, then using `=` on T might allocate a huge amount of memory, free a huge amount of memory, or neither. In C++ it could execute arbitrary code.
If you haven't actually used a language without RAII for an extended period of time then you just shouldn't bother commenting. RAII very clearly has its downsides, you should be able to at least reason about the tradeoffs without assuming your terrible strawman argument represents the other side of the coin accurately
> The use of trampolines requires an executable stack, which is a security risk. To avoid this problem, GCC also supports another strategy: using descriptors for nested functions. Under this model, taking the address of a nested function results in a pointer to a non-executable function descriptor object. Initializing the static chain from the descriptor is handled at indirect call sites.
So, if I understand it right, instead of a trampoline on an executable stack, the pointer to the function and its data are stored in the "descriptor", and then there is an indirect call through it. I guess that's better than an executable stack, but still...
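Conceptually something like the following (a rough sketch of the idea only, not GCC's actual descriptor layout or calling convention):

/* "Address of a nested function" yields a pointer to a small,
   non-executable record instead of a stack trampoline. */
struct func_descriptor {
    void (*code)(void *chain, int arg);  /* the real, top-level function  */
    void *chain;                         /* static chain: enclosing frame */
};

/* The indirect call site is what knows to unpack the descriptor. */
static void call_indirect(struct func_descriptor *d, int arg) {
    d->code(d->chain, arg);
}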
> Each of these 'phases' of LLM growth is unlocking a lot more developer productivity, for teams and developers that know how to harness it.
I still find myself incredibly skeptical that LLM use is increasing productivity. Because AI reduces cognitive engagement with tasks, it feels to me like AI increases perceived productivity but actually decreases it in many cases (and this probably compounds as AI-generated code piles up in a codebase, since there isn't an author who can supply the context for why decisions were made).
I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity. In my mind, like TikTok or online dating, AI is just another product motion toward maximizing comfort above all else, since cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant-gratification product from tech.
That's not to say that I don't use AI, but I use it primarily as search to see what is out there. If I use it for coding at all, I tend to primarily use it for code review. Even when AI does a good job at implementation of a feature, unless I put in the cognitive engagement I typically put in during code review, its code feels alien to me and I feel uncomfortable merging it (and I employ similar levels of cognitive engagement during code reviews as I do while writing software).
As a designer turned developer in the early 2000s, I beg you to learn the gestalt.
Frameworks, languages, and computers come and go, but the human body doesn't change, and the design knowledge I carry every day has barely changed over the years. Sure, there are new patterns now - "hamburger buttons" and swiping - but the logic remains the same. Humans don't change quickly. They discover things the same way.
Learn about visual hierarchy, visual rhythm, visual grouping, visual contrast, visual symmetry; the golden ratio; colour theory, etc. Think "subject" first, like in photography. Design for first glance and last glance.
Go beyond "do these align".
Think through the eyes of your user as if it's their first visit (there is no content yet, etc.) as well as their 1000th visit; cater to both cases: first-time and power users.
Understand the gestalt, understand the psychology behind design... Why does bright red jump out at you at a visceral level?
Feeling that something feels right is great, but understanding deeply why it feels right is a superpower.
Understand the human brain and its discovery process (how do babies discover the world? why do Westerners look top-left first?), and you might innovate in design, instead of designing to not offend anyone, or worse, copying Dribbble and other sources because "they spent the money".
Trust me, if you can learn React or Kubernetes, you surely can learn the gestalt and understand "The Design of Everyday Things"! That knowledge won't expire on you; you'll start seeing it everywhere and you'll carry it for the rest of your life.
I am always so skeptical of this style of response. Because if it takes hundreds of hours to learn to use something, how can it really be the silver bullet everyone was claiming earlier? Surely they were all in the midst of the 100 hours. And what else could we do if we spent 100 hours learning something? It's a lot of time, a huge investment, all on faith that things will get better.
This article does not begin to cover systems thinking. Cybernetics and metacybernetics are noticeably missing. Paul Cilliers' theory of complexity - unmentioned. Nothing about Stafford Beer and the viable system model. So on and so forth.
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
You really don't need anything fancy to implement a queue using SQL. You need a table with a primary id and a "status" field. An "expire" timestamp can be used instead of the "status"; we used the latter because it allows easy retries.
1. SELECT item_id WHERE expire = 0. If this is empty, no items are available.
2. UPDATE SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0. Then check whether UPDATE affected any rows. If it did, item_id is yours. If not, loop. If the database has a sane optimizer it'll note at most one document needs locking as the primary id is given.
All this needs is a very weak property: document level atomic UPDATE which can return whether it changed anything. (How weak? MongoDB could do that in 2009.)
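Spelled out as a sketch (the table and column names here are illustrative, not from any particular schema):

-- 0 means "available"; anything else is the time the current lease expires.
CREATE TABLE queue (
    item_id BIGINT PRIMARY KEY,
    payload TEXT,
    expire  BIGINT NOT NULL DEFAULT 0
);

-- 1. Find a candidate item (empty result = nothing to do).
SELECT item_id FROM queue WHERE expire = 0 LIMIT 1;

-- 2. Try to claim it; the "AND expire = 0" makes the claim race-free.
UPDATE queue
   SET expire = :now + :lease_seconds
 WHERE item_id = :selected_item_id AND expire = 0;
-- Affected rows = 1: the item is yours. Affected rows = 0: someone else won, loop.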
One way I found helpful for giving precise parsing errors without causing excessive false errors is to have the parser auto-complete the missing tokens with placeholder error nodes. E.g.
if (cond1 And ) { ... }
The And operator is missing its right operand, but everything else is valid.
The parser knows the And operator needs two arms. It can complete the missing arm with an error node.
fn parseCondition() {
    let leftNode = parseBoolExpr()
    if (lookaheadToken() is And)
        return parseAnd(leftNode)
    if (lookaheadToken() is Or)
        ...
    return leftNode
}

fn parseAnd(leftNode) {
    consumeToken()
    let rightNode = parseBoolExpr()
    if (rightNode != null)
        return AndNode(leftNode, rightNode)
    else
        return AndNode(leftNode, ErrorNode("missing operand", location))
}
In this way, the conditional expression is parsed to completion without interruption. Multiple errors can be handled with completion and recorded. At the end it's a matter of traversing the AST to print out all the ErrorNodes.
> Anyone who wants to learn about IQ should Google it
This is bad advice because Google returns poor results for most medical questions, including ones about controversial topics like IQ.
IQ was adopted as a pet cause by hard right-wing political theorists - for example, one of the authors of The Bell Curve.
When I was in grad school for psych, nobody serious studied it. Occasionally one person was still working on it, and everybody in the department whispered about them being a kook. This was at an elite psych department, it may have been different in smaller departments.
Oftentimes, if you see someone posting information about IQ, it's either (1) they're selling IQ tests, (2) they're selling services that administer IQ tests, or (3) they align with a political faction that politicizes IQ.
If you want to learn about IQ, the best thing is probably to find a recent review article published by a top tier journal that does not specialize in IQ research.
My take the last time I looked into it was that it helps locate people who have learning disabilities, but it's not great at predicting individual outcomes.
The measure most people intuitively think of is correlation of IQ with success, keeping SES constant and throwing out the lowest range of IQ. That is, you want to know the incremental benefit of having a higher IQ given that you're not suffering from a learning disability. And you also don't want to accidentally measure the obvious impact that having more money gives you more opportunities.
When you make these adjustments it quickly becomes clear that IQ is much messier than people in this thread are claiming. For example, heritability varies by SES. And heritability is generally not what people think it is naively.
A regular "chatting" LLM is a document generator incrementally extending a story about a conversation between a human and a robot... And through that lens, I've been thinking "chain of thought" seems like basically the same thing, but with a film-noir styling twist.
The LLM is trained to include an additional layer of "unspoken" text in the document, a source of continuity which substitutes for how the LLM has no other memories or goals to draw from.
"The capital of Assyria? Those were dangerous questions, especially in this kind of town. But rent was due, and the bottle in my drawer was empty. I took the case."
I think there is a big problem with antibiotic overuse in food animals, certainly. Although they can be useful for growth, large livestock farms are environments where drug-resistant infections can spread easily. I try to buy antibiotic-free meats as a rule but that doesn't really stop the damage caused by the meat companies that don't care about it.
As far as human antibiotic use, the flip side are the colleagues who will tell me that they did a thorough workup on a patient, found no indication for antibiotics, told the patient so to the best of their ability, and got dinged on the insurer's survey or castigated on doctor rating sites. I'm of course only hearing the provider's side of the story, but nobody likes to be told no, even when it's the right answer. (Also insert opioids, stimulants and benzodiazepines into this conversation.)
So there's this guy you may have heard of called Ryan Fleury who makes the RAD debugger for Epic. The whole thing is made with 278k lines of C and is built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent Windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case study that compilation can be incredibly fast, and it makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.
If calculators were unreliable... Well, we'd be screwed if everyone blindly trusted them and never learned math.
They'd also be a whole lot less useful. Calculators are great because they always do exactly what you tell them. It's the same with compilers, almost: imagine if your C compiler did the right thing 99.9% of the time, but would make inexplicable errors 0.1% of the time, even on code that had previously worked correctly. And then CPython worked 99.9% of the time, except it was compiled by a C compiler working 99.9% of the time, ...
But bringing it back on-topic, in a world where software is AI-generated, and tests are AI-generated (because they're repetitive, and QA is low-status), and user complaints are all fielded by chat-bots (because that's cheaper than outsourcing), I don't see how anyone develops any expertise, or how things keep working.
When I make Apple-style presentations (no visual noise, no bullet-point lists, one appealing visual/idea per slide, etc., narrating the story instead of showing densely packed info on one slide after another), I can literally see that my audience is really enjoying the presentation and getting the idea - but then management constantly approaches me telling me to use the corporate template, stick to the template, use the template elements, etc.
They just don't get what makes a good presentation, even when they themselves enjoy the content while they are in the audience.
Futile.
Edit: Tangential: I am the only one using a MacBook in a company of 700+ coworkers.