The neuroscience here hints at something that current AI systems still lack:
a direct, internal positive signal tied to closing a reasoning loop.
Transformers learn almost everything through language-like supervision. Wrong token = small penalty, right token = small reward. That’s great for pattern induction, but it means the model treats a correct chain-of-thought and a beautifully phrased but wrong chain-of-thought as almost the same kind of object—just sequences with slightly different likelihoods.
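To make that concrete, here is a minimal PyTorch sketch (toy tensors standing in for a real transformer, nothing from any actual training run): both a valid chain and a fluent-but-wrong one reduce to per-token cross-entropy, and nothing in the objective marks one as structurally sound.

```python
# Toy illustration: under plain next-token cross-entropy, a valid reasoning
# chain and a fluent-but-wrong one are just two token sequences with slightly
# different average losses.
import torch
import torch.nn.functional as F

vocab_size = 100
model = torch.nn.Linear(16, vocab_size)        # stand-in for a transformer's output head
hidden = torch.randn(2, 8, 16)                 # hypothetical hidden states for two chains
logits = model(hidden)                         # shape (batch=2, seq=8, vocab)

correct_chain = torch.randint(0, vocab_size, (8,))  # tokens of a valid derivation
wrong_chain = torch.randint(0, vocab_size, (8,))    # tokens of a fluent but invalid one

loss_correct = F.cross_entropy(logits[0], correct_chain)
loss_wrong = F.cross_entropy(logits[1], wrong_chain)

# The training signal is only this scalar gap; nothing flags the first chain
# as structurally valid and the second as broken.
print(loss_correct.item(), loss_wrong.item())
```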
Human reasoning isn’t like that.
When a logic chain closes cleanly, the brain fires a strong internal reward. That “Aha” isn’t just emotion; it’s an endogenous learning signal saying: this structure is valid, keep this, reuse this. It’s effectively a structural correctness reward, orthogonal to surface language.
If AI ever gets a similar mechanism, a way to mark “self-consistent causal closure” as positively rewarded, we might finally bridge the gap between language-trained reasoning and true general learning (a rough sketch of what that could look like follows the list below). It would matter for:
fast abstraction formation
reliable logical inference
discovering new concepts rather than remixing old ones
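One speculative way to cash this out with today's tooling: keep the ordinary token-level loss, but subtract a positive bonus whenever an external checker confirms the chain actually closes. This is only a sketch under assumed names (`verify_closure` is a hypothetical verifier, not an existing API), and it is closer to verifier-based reward shaping than to whatever the brain is doing.

```python
# Speculative sketch: add a positive "closure bonus" on top of the usual
# token-level loss whenever an external checker (proof checker, unit test,
# constraint solver) confirms the reasoning chain is self-consistent.
import torch

def training_loss(token_loss: torch.Tensor,
                  chain_text: str,
                  verify_closure,
                  bonus_weight: float = 0.5) -> torch.Tensor:
    """Combine ordinary next-token loss with a reward for verified closure."""
    closed = verify_closure(chain_text)      # True if the chain checks out
    bonus = bonus_weight if closed else 0.0  # positive jolt only on genuine closure
    return token_loss - bonus                # lower loss == reinforced structure

# Usage with a trivial stand-in verifier:
dummy_verifier = lambda text: text.endswith("QED")
loss = training_loss(torch.tensor(1.2), "therefore x = 3. QED", dummy_verifier)
print(loss)  # roughly tensor(0.7000)
```

The design choice worth noting: the bonus is tied to a structural check on the whole chain, not to any property of the surface tokens, which is exactly the orthogonality the “Aha” signal seems to have.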
Backprop gives us gradient-based correction, but it’s mostly negative feedback. There’s no analogue of the brain’s “internal positive jolt” when a new idea snaps together.
If AGI needs general learning, maybe the missing piece isn’t more scale — it’s this reward for closure.
"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."
Note the subject.
So yes, the First Amendment has nothing to say about censorship of information by a foreign government or company; it only binds Congress.
"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." (Universal Declaration of Human Rights, Article 19)
I don't see any mention of government there. Why limit the interpretation of free speech to the US constitution? How do you reconcile ideologically or philosophically that it is not ok for US Congress to limit speech but ok for other powerful entities?
Exactly, that is how the Chinese government takes control of your mainstream media and social media; it's much cheaper and easier than building aircraft carriers, and more effective.
Your McCarthyist fear and paranoia and the President's cynical manipulation of the same don't trump people's First Amendment rights. The court upheld that principle, thankfully.