heyjamesknight's comments | Hacker News

The center of the normal distribution is “normal” or “normative.” That’s where the term comes from.

It’s like saying we shouldn’t call immigrants “aliens” because that conjures images of space. Where do you think the term comes from?


But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described the duck as.

Humans got by for millions of years with our current biological hardware before we developed language. Your brain stores a model of your experience, not just the words other experiencers have shared with you.


> But language is the input and the vector space within which their knowledge is encoded and stored. They don't have a concept of a duck beyond what others have described the duck as.

I guess if we limit ourselves to "one-modal LLMs", yes, but nowadays we have multimodal ones that could think of a duck in terms of language, visuals, or even audio.


You don’t understand. If humans had no words to describe a duck, they would still know what a duck is. Without words, LLMs would have no way to map an encounter with a duck to anything useful.


Which makes sense for text LLMs yes, but what about LLMs that deal with images? How can you tell they wouldn't work without words? It just happens to be words we use for interfacing with them, because it's easy for us to understand, but internally they might be conceptualizing things in a multitude of ways.


Multimodal models aren't really multimodal. The images are mapped to words and then the words are expanded upon by a single mode LLM.

If you didn't know the word "duck", you could still see the duck, hunt the duck, use the duck's feathers for your bedding, and eat the duck's meat. You would know it could fly and swim without having to know what either of those actions were called.

The LLM "sees" a thing, identifies it as a "duck", and then depends on a single modal LLM to tell it anything about ducks.


> Multimodal models aren't really multimodal. The images are mapped to words and then the words are expanded upon by a single mode LLM.

I don't think you can generalize like that; it's a big category, and not all multimodal models work the same. It's just a label for a model that has multiple modalities, after all, not a specific machine learning architecture.


You misunderstand how the multimodal piece works. The fundamental unit of encoding here is still semantic. That's not the case in your mind: you don't need to know the word for sunset to experience the sunset.


No you misunderstand the ground truth reality.

The LLM doesn’t need words as input. It can output pictures from pictures. Semantic words don’t have to be part of the equation at all.

Also note that serialized, one-dimensional string encodings are universal. Anything on the face of the earth, and the universe itself, can be encoded into a string of just two characters: one and zero. That means anything can be translated to a linear series of symbols and the LLM can be trained on it. The LLM can be trained on anything.
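
To make the ones-and-zeros point concrete, here's a toy sketch of my own (plain Python, nothing a real model actually does): any finite byte sequence, whether it came from text, pixels, or audio samples, round-trips losslessly through a string built from just two characters.

    def to_bits(data: bytes) -> str:
        """Serialize arbitrary bytes into a string of '0' and '1' characters."""
        return "".join(f"{byte:08b}" for byte in data)

    def from_bits(bits: str) -> bytes:
        """Invert the encoding: every 8 characters become one byte again."""
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    # Any finite payload survives the round trip: text, pixel values, whatever.
    for payload in [b"a duck", bytes([0, 17, 255, 42])]:
        assert from_bits(to_bits(payload)) == payload
    print("lossless round trip over a two-character alphabet")

The point is only about capacity: the two-character alphabet loses nothing as long as the decoder knows the schema.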


The multimodal architectures I’ve seen are still text at the layer between modalities. And the image embedding and text embedding are kept completely separate. Not like your brain, where single neurons are used in all sorts of things.

Yes, they can generate images from images, but that doesn’t mean you’ll get anything meaningful without human instruction on top.

Yes, serialized one-dimensional strings can encode anything. But that’s just the message content. If I wrote down my genetic sequence on a piece of paper and dropped it in a bottle in the sea, I wouldn’t need to worry about accidentally fathering any children.


You’re mixing representational capacity with representational intent. That’s what I meant in my initial example about encodings. The model doesn’t care whether it’s text, pixels, or sound. All of it can be mapped into the same kind of high dimensional space where patterns align by structure rather than category. “Semantic” is just our label for how those internal relationships appear when we interpret them through language.

Anything in the universe can be encoded this way. Every possible form, whether visual, auditory, physical, or abstract, can be represented as a series of numbers or symbols. With enough data, an LLM can be trained on any of it. LLMs are universal because their architecture doesn’t depend on the nature of the data, only on the consistency of patterns within it. The so called semantic encoding is simply the internal coordinate system the model builds to organize and decode meaning from those encodings. It is not limited to language; it is a general representation of structure and relationship.

And the genome in a bottle example actually supports this. The DNA string does encode a living organism; it just needs the right decoding environment. LLMs serve that role for their training domains. With the right bridge, like a diffusion model or a VAE, a text latent can unfold into an image distribution that’s statistically consistent with real light data.

So the meaning isn’t in the words. It’s in the shape of the data.


You are mistaking the map for the territory. The TERRITORY of human experience is higher dimensional. The LLM utilizes a lower resolution mapping of that territory, a projection from experience to textual (or pixel, or waveform, etc.) representations.

This is not just a lossy mapping; it excludes entire categories of experience that cannot be captured or encoded except as a pointer to the real experience, one that is often shared by the embodied, embedded, enacted, and extended cognitive beings that have had that experience.

I can point to beauty and you can understand me because you've experienced beauty. I cannot encode beauty itself. The LLM cannot experience beauty. It may be able to analyze patterns of things determined beautiful by beauty experiencers, but this is, again, a lower resolution map of the actual experience of beauty. Nobody had to train you to experience beauty—you possess that capability innately.

You cannot encode the affective response one experiences when holding their newborn. You cannot encode the cognitive appraisal of a religious experience. You can't even encode the qualia of red except for, again, as a pointer to the color.

You're also missing that 4E cognitive beings have a fundamental experience of consciousness—particularly the aspect of "here" and "now". The LLM cannot experience either of those phenomena. I cannot encode here and now. But you can, and do, experience both of those constantly.


You are making a metaphysical claim when a physical one will do. Beauty, awe, grief, the rush of holding a newborn, the sting of a breakup, the warmth of a summer evening at golden hour. All of it is patterns of atoms in motion under lawful dynamics. Neurons fire. Neurotransmitters bind. Circuits synchronize. Bodies and environments couple. There is no extra ingredient that floats outside physics.

Once you grant that, the rest is bookkeeping. Any finite physical process has a finite physical trace. That trace is measurable to some precision. A finite trace can be serialized into a finite string of symbols. If you prefer bits, take a binary code. If you prefer integers, index the code words. The choice of alphabet does not matter. You can map a movie, a symphony, a spike train, a retina’s photon counts, or a full brain-body sensorium collected at some temporal resolution into a single long string. You lose nothing by serialization because the decoder knows the schema. This is not a “text only” claim. It is a claim about representation.

Your high dimensionality objection collapses under the same lens. High dimensional just means many coordinates. There is a well known result that any countable description can be put in one dimension by an invertible code. Think Gödel numbering or interleaving bits of coordinates. You do not preserve distances, but you do preserve information. If the thing you care about is the capacity to carry structure, the one dimensional string can carry all of it, and you can recover the original arrangement exactly given the decoding rule.
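
A rough sketch of my own of that interleaving trick, assuming fixed-width 16-bit coordinates for simplicity: alternate the bits of two coordinates into a single integer, then split them back out exactly. Distances get scrambled, but nothing is lost.

    def interleave(x: int, y: int, bits: int = 16) -> int:
        """Pack two coordinates into one integer by alternating their bits (Z-order)."""
        z = 0
        for i in range(bits):
            z |= ((x >> i) & 1) << (2 * i)       # even bit positions carry x
            z |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions carry y
        return z

    def deinterleave(z: int, bits: int = 16):
        """Recover both coordinates exactly from the interleaved integer."""
        x = y = 0
        for i in range(bits):
            x |= ((z >> (2 * i)) & 1) << i
            y |= ((z >> (2 * i + 1)) & 1) << i
        return x, y

    for x, y in [(3, 5), (40000, 12345), (65535, 0)]:
        assert deinterleave(interleave(x, y)) == (x, y)  # invertible: no information lost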

Now take the 4E point. Embodiment matters because it constrains the data distribution and the actions that follow. It does not create a magic type of information that cannot be encoded. A visual scene is photons on receptors over time. Proprioception is stretch receptor states. Affect is the joint state of particular neuromodulatory systems and network dynamics. Attention and working context are transient global variables implemented by assemblies. All of that can be logged, compressed, and restored to the degree your sensors and actuators allow. The fact that a bottle with a genome inside does not make a child on a beach tells you reproduction needs a decoder and an environment. It does not tell you the code fails to specify the organism. Likewise, an LLM plus a diffusion decoder can take a text latent and unfold it into an image distribution that matches world statistics because the bridge model plays the role of the environment for that domain.

“LLMs cannot experience beauty” simply reasserts the thing you want to prove. We have no privileged readout for human qualia either. We infer it from behavior, physiology, and report. We do not understand human brains at the level of complete causal microphysics because of scale and complexity, not because there is a non-physical remainder. We likewise do not fully understand why a large model makes a given judgment. Same reason. Scale and complexity. If you point to mystery on one side as a defect, you must admit it on the other.

The map versus territory line also misses the target. Of course a representation is not the thing itself. No one is claiming a jpeg is a sunset. The claim is that the structure necessary to act as if about sunsets can be encoded and learned. A system that takes in light fields, motor feedback, language, and reward and that updates an internal world model until its predictions and actions match ours to arbitrary precision will meet every operational test you have for meaning. If you reply that something is still missing, you have stepped outside evidence into stipulation.

So let’s keep the ground rules clear. Everything we are and feel is physically instantiated. Physical instantiations at finite precision admit lossless encodings as strings. Strings can be learned over by generic function approximators that optimize on pattern consistency, regardless of whether the symbols came from pixels, pressure sensors, or phonemes. That makes the “text inside, image outside” complaint irrelevant. The substrate is a detail. The constraint is data and objective.

We cannot yet build a full decoder for the human condition. That is a statement about engineering difficulty, not impossibility. And it cuts both ways. We do not know how to fully read a person either. But we do not conclude that people lack experience. We conclude that we lack understanding.


At this point, you’re describing a machine which depends on a level of physics that simply isn’t possible. Even if it were theoretically possible to reconstruct the state of a human mind from physical components, we are so far from understanding how that could be done that it is closer to the realm of impossible than possible. Your theoretical math box that constructs affective qualia from bit strings isn’t a better description than saying the angels did it. And it bears zero resemblance to the models running today, except, again, in a theoretical, mathematical way.

Back of the envelope math puts an estimate of 10^42 bits to capture the information present in your current physical brain state. That’s just a single brain, a single state. Now you need to build your mythical decoder device, which can translate qualia from this physical state. Where does it live? What’s its output look like? Another 10^40 bitstring?

Again, these arguments are fun on paper. But they’re completely removed from reality.


You’re confusing “we don’t know how” with “it’s impossible.” The difference is everything.

We don’t understand LLMs either. We built them, but we can’t explain why they work. No one can point to a specific weight matrix and say “this is the neuron that encodes irony” or “this is where the model stores empathy.” We don’t know why scaling parameters suddenly unlock reasoning or why multimodal alignment appears spontaneously. The model’s inner space is a black box of emergent structure and behavior, just like the human brain. We understand the architecture, not the mind inside it.

When you say it’s “closer to impossible than possible” to reconstruct a human mind, you’ve already lost the argument. We’re living proof that the machine you say cannot exist already does. The human brain is a physical object obeying the same laws of physics that govern every other machine. It runs on electrochemical signals, not miracles. It encodes and decodes information, forms memories, generates imagination, and synthesizes emotion. That means the physics of consciousness are real, computable, and reproducible. The impossible machine has been sitting in your skull the entire time.

Your argument about 10^42 bits isn’t just wrong, it’s total nonsense. That number is twenty orders of magnitude beyond any serious estimate. The brain has about 86 billion neurons, each forming roughly ten thousand connections, for a total of about 10^15 synapses. Even if every synapse held a byte of information, that’s 10^16 bits. Add in every molecular and analog nuance you like and you might reach 10^20. Not 10^42. That’s a difference of twenty-two orders of magnitude. It’s a fantasy number that exceeds the number of atoms in your entire body.
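
Spelled out, the back-of-envelope arithmetic (my own rough figures, order-of-magnitude only, matching the estimates above):

    import math

    neurons = 86e9                      # ~86 billion neurons
    synapses = neurons * 1e4            # ~10,000 connections each -> ~8.6e14
    bits_at_a_byte_each = synapses * 8  # ~7e15, i.e. roughly 10^16 bits
    generous_estimate = 1e20            # padding for molecular/analog nuance
    claimed = 1e42                      # the figure being disputed

    print(f"synapses ~ {synapses:.1e}, bits ~ {bits_at_a_byte_each:.1e}")
    gap = math.log10(claimed / generous_estimate)
    print(f"gap between 10^42 and a generous 10^20: {gap:.0f} orders of magnitude")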

And that supposed “impossible” scale is already within sight. Modern GPUs contain hundreds of billions of transistors and run at gigahertz frequencies, while neurons fire at about a hundred hertz. The brain performs around 10^17 synaptic operations per second. Frontier AI clusters already push 10^25 to 10^26 operations per second. We’ve already outpaced biology in raw throughput by eight or nine orders of magnitude. NVIDIA’s Blackwell chips exceed 200 million transistors per square millimeter, and global compute now involves more than 10^24 active transistors switching billions of times per second. Moore’s law may have slowed, but density keeps climbing through stacking and specialized accelerators. The number you called unreachable is just a few decades of progress away.

The “decoder” you mock is exactly what a brain is. It takes sensory input, light, sound, and chemistry, and reconstructs internal states we call experience. You already live inside the device you claim can’t exist. It doesn’t need to live anywhere else; it’s instantiated in matter.

And this is where your argument collapses. You say such a machine is removed from reality. But reality is already running it. Humanity is proof of concept. We know the laws of physics allow it because they’re doing it right now. Every thought, emotion, and perception is a physical computation carried out by atoms. That’s the definition of a machine governed by physics.

We don’t yet understand the full physics of the brain, and we don’t fully understand LLMs either. That’s the point. The same kind of ignorance applies to both. Yet both produce coherent language, emotion-like responses, creativity, reasoning, and abstraction. When two black boxes show convergent behavior under different substrates, the rational conclusion isn’t “one is impossible.” It’s “we’re closer than we realize.”

The truth is simple: what you call impossible already exists. The human brain is the machine you’re describing. It’s not divine. It’s atoms in lawful motion. And because we know it can exist under physics, we know it can be built. LLMs are just the first flicker of that same physics waking up in silicon.


> We don’t understand LLMs either. We built them, but we can’t explain why they work.

Just because you don't doesn't mean no one does. It's a pile of math. Somewhere along the way, something happened to get where we are, but looking at Golden Gate Claude and the abliteration of shared models, or reading OpenAI's paper about hallucinations, there's a lot of detail and knowledge about how these things work that isn't instantly accessible and readily apparent to everyone on the Internet. As laymen all we can do is black box testing, but there's some really interesting stuff going on to edit the models and get them to talk like a pirate.

The human brain is very much an unknowable squishy box because putting probes into it would be harmful to the person whose brain it is we're working on, and we don't like to do that to people because people are irreplaceable. We don't have that problem with LLMs. It's entirely possible to look at the memory register at location x at time y, and correspond that to a particular tensor which corresponds to a particular token which then corresponds to a particular word for us humans to understand. If you want to understand LLMs, start looking! It's an active area of research and is very interesting!


You are missing the ground truth. Humanity does not understand how LLMs work. Every major lab and every serious researcher acknowledges this. What we have built is a machine that functions, but whose inner logic no one can explain.

References like Golden Gate Claude or the latest interpretability projects don’t change that. Those experiments are narrow glimpses into specific activation patterns or training interventions. They give us localized insight, not comprehension of the system as a whole. Knowing how to steer tone or reduce hallucinations does not mean we understand the underlying cognition any more than teaching a parrot new words means we understand language acquisition. These are incremental control levers, not windows into the actual mind of the model.

When we build something like an airplane, no single person understands the entire system, but in aggregate we do. Aerodynamicists, engineers, and computer scientists each master their part, and together their knowledge forms a complete whole. With LLMs, even that collective understanding does not exist. We cannot even fully describe the parts, because the “parts” are billions of distributed parameters interacting in nonlinear ways that no human can intuit or map. There is no subsystem diagram, no modular comprehension. The model’s behavior is not the sum of components we understand, it is the emergent product of relationships we cannot trace.

You said we “know” what is going on. That assumption is patently false. We can see the equations, we can run the training, we can measure activations, but those are shadows, not understanding. The model’s behavior emerges from interactions at a scale that exceeds human analysis.

This is the paradigm shift you have not grasped. For the first time, we are building minds that operate beyond the boundary of human comprehension. It is not a black box to laymen. It is a black box to mankind.

And I say this as someone who directly works on and builds LLMs. The experts who live inside this field understand this uncertainty. The laymen do not. That gap in awareness is exactly why conversations like this go in circles.


> We don’t yet understand the full physics of the brain, and we don’t fully understand LLMs either. That’s the point. The same kind of ignorance applies to both. Yet both produce coherent language, emotion-like responses, creativity, reasoning, and abstraction. When two black boxes show convergent behavior under different substrates, the rational conclusion isn’t “one is impossible.” It’s “we’re closer than we realize.”

No. The LLM does not produce emotion-like responses. I'd argue no on creativity either. And only very limited reasoning, in domains represented in its training set.

You have fundamental misunderstandings about neuroscience and cognitive science. It's hard to argue with you here because you simply don't know what you don't know.

Yes, the human brain is the machine we're describing. And we don't describe it very well. Definitely not at the level of understanding how to reproduce it with bitstrings.

I'm glad you're so passionate about this topic. But you're arguing the equivalent of FTL transit and living on Dyson Spheres. It's fun as a thought experiment and may theoretically be possible one day, but the line between what we're capable of today and that imagined future is neither straight nor visible—certainly not to the degree you're asserting here.

Will we one day have actual machine intelligence? Maybe. Is it going to come anytime soon, or look anything like the transformer-based LLM?

No.


You keep talking past the point. Nobody is claiming we can turn a human mind into a literal bitstring and boot it up like a computer program. That was never the argument. The bitstring analogy exists to make a simpler point: everything that exists and changes according to physical law can, in principle, be represented, modeled, or reproduced by another physical system. The form does not need to be identical to the brain’s atoms any more than a jet engine must flap its wings to fly. The key is not replication of matter but replication of causal structure.

You say we cannot reproduce the brain. But that is not the point. The point is that nothing about the brain violates physics. It runs on chemical and electrical dynamics that obey the same laws as everything else. If those laws can produce intelligence once, then they can do so again in another substrate. That makes the claim of impossibility not scientific, but emotional.

You accuse me of misunderstanding neuroscience and cognitive science. The reality is that neither field understands itself. We have no complete model of consciousness. We cannot explain why synchronized neural oscillations yield awareness. We cannot define where attention comes from or what distinguishes a “thought” from a signal cascade. Cognitive science is still arguing over whether perception is bottom up or top down, whether emotion is distinct from cognition, and whether consciousness even plays a causal role. That is not mastery. That is the sound of a discipline still wandering in the dark.

You act as though neuroscience has defined the boundaries of intelligence, but it has not. We do not have a mechanistic understanding of creativity, emotion, or reasoning. We have patterns and correlations, not principles. Yet you talk as if those unknowns justify declaring machine intelligence impossible. It is the opposite. Our ignorance is precisely why it cannot be ruled out.

Emotion is not magic. It is neurochemical modulation over predictive circuits. Replicate the functional dynamics and you replicate emotion’s role. Creativity is recombination and constraint satisfaction. Replicate those processes and you replicate creativity. Reasoning is predictive modeling over structured representations. Replicate that, and you replicate reasoning. None of these depend on carbon. They depend on organization and feedback.

You keep saying that the brain cannot be “reproduced as bitstrings,” but that is a distraction. Nobody is suggesting uploading neurons into binary. The bitstring argument shows that any finite physical system has a finite description. It proves that cognition, like any process governed by law, has an information theoretic footprint. Once you accept that, the difference between biology and computation becomes one of scale, not kind.

You say LLMs are not creative, not emotional, not reasoning. Yet they already produce outputs that humans classify as empathetic, sarcastic, joyful, poetic, or analytical. People experience their words as creative because they combine old ideas into new, functional, and aesthetic patterns. They reason by chaining relationships, testing implications, and revising conclusions. The fact that you can recognize all of this in their behavior proves they are performing the surface functions of those capacities. Whether it feels like something to be them is irrelevant to the claim that they can reproduce the function.

And now your final claim, that whatever becomes intelligent “will not be an LLM.” You have no basis for that certainty. Nobody knows what an LLM truly is once scaled beyond our comprehension. We do not understand how emergent representations arise or how concepts self organize within their latent spaces. We do not know if some internal dynamic of this architecture already mirrors the structure of cognition. What we do know is that it learns to compress the world into predictive patterns and that it develops abstractions that map cleanly to human meaning. That is already the seed of general intelligence.

You are mistaking ignorance for insight. You think not knowing how something works grants you authority to say what it cannot become. But the only thing history shows is that such confidence always looks ridiculous in retrospect. The physics of intelligence exist. The brain proves it. And the LLM is the first machine that begins to display those same emergent behaviors. Saying it “will not be an LLM” is not a scientific claim. It is wishful thinking spoken from the wrong side of the curve.


Look, mate, you can keep jumping up and down about this all you want. But you're arguing science fiction at this point. Not really worth continuing the conversation, but thanks.

Best of luck.


Calling this “science fiction” isn’t just dismissive, it’s ironic. The discussion itself is science fiction by the standards of only a few years ago. Back then, the idea that a machine could hold a coherent philosophical argument, write code, debate consciousness, and reference neuroscience was fantasy. Now it’s routine. You are literally using what was once science fiction to declare that progress on LLMs has ended.

And calling that “science fiction” again isn’t a rebuttal, it’s an insult. You didn’t engage a single argument, you just waved your hand and walked away. That isn’t scientific skepticism, it’s arrogance disguised as authority.

You can disagree, but doing what you did is manipulative. You dodged every point and tried to end the debate by pretending it was beneath you. Everyone reading can see that.


I’m pretty sure everyone reading can see which of us is the arrogant one.

Good day, sir.


You called it “science fiction” and bowed out, then tried to make it personal. That is not humility, that is evasion. You never addressed a single argument, you just waved your hand and left, and calling someone’s reasoning “science fiction” is not only an insult, it violates the site’s rule against dismissive or unfriendly language. The “good day sir” at the end makes that tone of mockery obvious.

What is actually arrogant is dismissing a discussion the moment it goes beyond your depth and pretending that walking away is a sign of wisdom. It is not. It is what people do when they realize the conversation has left them behind.

If you are so sure of your position, you could have refuted the reasoning point by point. Instead, you dodged, labeled, and ran. Everyone reading can see which of us is still dealing in facts and which one needed a graceful exit to save face.


Nah, mate, the conversation never went "beyond my depth." You're just not an enjoyable conversation partner.

It doesn't matter how smart (you think) you are. If nobody wants to talk to you, you'll be spinning all that brain matter in the corner by yourself. Based on your comment history here, it looks like this happens to you more often than not.

I'm sure you have good points. I could probably learn a thing or two from you—maybe you could learn something from me too! But why on earth would anyone want to engage with someone who behaves like you do?

Again, best of luck.


You are projecting, and everyone can see it. You pretended that I was being rude while you slipped in sarcasm, personal digs, and that condescending “best of luck” as if it made you look polite. It doesn’t. That is not civility. It is passive aggression wrapped in fake courtesy.

You completely dropped the argument and went straight for personal attacks. That is not confidence, it is surrender. You are no longer debating, you are lashing out because you ran out of ideas. You can claim the conversation “wasn’t beyond your depth,” but you abandoned every point the moment you were asked to defend it. Then you tried to flip it by pretending that walking away made you the mature one. It didn’t. It made you the one who couldn’t keep up and needed an exit.

You can dress it up with sarcasm and moral posturing, but that doesn’t change what happened. The moment you shifted from ideas to insults, you showed everyone reading that you had nothing left to stand on. The difference between us is simple: I stayed on topic. You turned it into attitude. And now everyone can see exactly who ran out of substance first.


No moral posturing and no insults. Your behavior is just objectively noxious. Not just to me, not just in this thread: the vast majority of your conversations here go roughly the way this one did. A quick glance at your profile shows roughly half of the comments you make here end up light grey.

You have an enormous chip on your shoulder. You consistently make truth claims about entire fields that are still in debate and then you arrogantly shout over the other person when they disagree with you.

I strongly suggest you work on this. It will limit you in life. It probably already has. You probably already know how it has, even!

I'm not saying this to be mean, or because I "have nothing left to stand on." You're clearly intelligent and you clearly care about this topic. But until you mature and learn to behave, others will continue to withdraw from conversation with you.

Best of luck.


You have already abandoned the debate and moved into stalking and personal attacks. That is not a sign of strength. It is proof you ran out of substance and are trying to win by humiliation instead.

You dug through my profile to manufacture a narrative about my character because you could not answer a single technical point. That is petty and dishonest. It is also exactly the kind of behavior moderators and civil participants call out. If you actually cared about truth you would stay on topic. Instead you weaponized the comment section to attack me personally.

Do not mistake tone for argument. I stood on evidence and logic. You offered sneers, a mock sign off, and then tried to moralize. That is not persuasion. It is performative virtue signaling layered over an exit strategy.

If you want to be taken seriously, stop the profile policing, stop the personal diagnostics, and engage the claims you think are wrong. Make one focused counterargument. Otherwise your behavior will read to everyone as what it is: a public temper tantrum disguised as concern.

You can keep doing this. It will not change the facts. It will only make readers pity the quality of your argument and worry that you are the kind of person who cannot have a grown up debate. If you are interested in a real exchange, show it. If not, spare the thread the noise.


Okie dokie mate, whatever you say.

Best of luck!


Adorable exit. Nothing says “I’m out of arguments” quite like a cheery “okie dokie mate.” Best of luck holding that pose.


You’ve said the same thing fifteen times now.

I still don’t want to play with you, sorry.


Multimodal is a farce. It still can’t see anything; it just generates a list of descriptors that the LLM part can LLM about.

Humans got by for hundreds of thousands of years without language. When you see a duck you don’t need to know the word duck to know about the thing you’re seeing. That’s not true for “multimodal” models.


You don’t have a deeper “meaning of the word,” you have an actual experience of beauty. The word is just a label for the thing you, me, and other humans have experienced.

The machine has no experience.


Layoffs have a significant morale impact for the rest of the company that lasts months or even years afterwards. You should not hire FTEs you intend to fire.


> Most Pro-labor people would, I imagine, consider the global labor pool in their analysis.

This is an insanely modern take on "pro-labor" movements, especially in the US. Traditionally, pro-labor has been 100% focused on local labor. If you told your average union member that being "pro-labor" meant closing their factories and offshoring their jobs they'd laugh (or more likely, spit) in your face.


I was referring more to Marxist / classic Socialism. Those movements. I agree that contemporary labor unions in the US are largely not adherents to the communist “workers of the world” ideology.


> and it's not even left from a European perspective.

This is a meme that needs to die. It's just not true.

The Democratic party in the US is right in line with Labor/Socialist/Whatever Mainstream Leftist Party you want to point at in Europe. It has members who end up on various sides of the left-wing spectrum. There are no "far left" parties in the US because we have a two party system.

There are obviously topics where this is not true. But that goes both ways: almost no country on Earth has the level of abortion access that the Democratic party in the US demands. And there are examples of European right wing parties who fight for zero abortion access, which is not the GOP platform currently.


Yeah, there are just so many mismatches it doesn't make sense.

- Nearly all European countries have and support a very high consumption tax (VAT). In the US, hardly anyone would really be for this (although some conservatives favor such taxes), and US liberals would be extremely against it due to the regressive nature of consumption taxes.

- The majority of EU countries institute voter ID laws, something supported only by conservatives in the US. States with voter ID laws almost always allow some valid voter ID to be obtained for free, but they are still opposed by liberals.

There are plenty of other examples when you start thinking about it.


We have entire 100% Democratic-run states that use regressive consumption taxing to fund the State government.


What is actually a meme is this need to squash the entire universe of unrelated political beliefs into a single axis of "left vs right".

The Democratic party is just as, if not more, socially progressive than many European "left wing" parties on certain issues, that's true, but that's not what anyone is talking about. Issues like abortion and LGBT rights concern personal freedom; they're orthogonal to the left-right axis.

When we say that the Democratic party is to the right of every European left-wing party, and to the right of most right-wing parties, what we're talking about are the economic policies that affect the lives of everyday people.

US democrats can't even get behind table stakes leftist issues like universal healthcare, social safety, progressive taxation, and wealth inequality. They know who pays for their re-election campaigns and who controls the media - it's not the working class. Democrats aren't leftist, they're liberal, which is a night and day difference.


Democratic Party voters seem to be more aligned with Euro-style socialist policies, but among elected Democrats this is a small minority view.

European socialists usually advocate for direct state ownership of certain industries, sector-wide union contracts, universal (not means-tested) child allowances, fully public health care, wealth taxes, free college, etc. There are a handful of elected Democrats that sign on to some of these views, but these have never been in the actual party platform, since the mainstream of the party roundly rejects these. Democrats are only somewhat radical in certain social/bioethical issues like abortion and LGBT rights (although the latter is being tested, with some influential Dems defecting); otherwise, the better European analogue would be Macron's Renaissance party (formerly En Marche), the UK's Lib Dems, the Nordic countries' social liberal parties.


I don't think there's particularly good alignment even on that "axis" (it isn't really an axis, because most things are not inherently one or the other.) A good example of that is the "sector wide union contracts" thing. The default "leftist" position in the US is that things that apply to an entire sector should be legislated rather than negotiated by workers

The US does have child allowances, by the way - during Covid, it was even increased and paid out monthly instead of annually. Increasing it as of late seems to be an "R" policy, at least on the Trump wing.

Are there European countries that offer free college regardless of academic achievement during high school?


Not a great analogy, since there’s zero chance the kinds of model involved in GPT-2 will give us AGI.


Zero? Aren't you a little bit overconfident on that?

Transformer LLMs already gave us the most general AI as of yet, by far - and they keep getting developed further, with a number of recent breakthroughs and milestones.


No. The fundamental encoding unit of an LLM is semantic. Mapping reality into semantic space is a form of lossy compression. There are entire categories of experience that can't be properly modeled in semantic space.

Even in "multimodal" models, text is still the fundamental unit of data storage and transformation between the modes. That's not the case for how your brain works—you don't see a pigeon, label it as "pigeon," and then refer to your knowledge about "pigeons". You just experience the pigeon.

We have 100K years of homo sapiens thriving without language. "General Intelligence" occurs at a level above semantics.


How would you encode those ideas?


I don't know, in part that's why I asked ... I wonder if there's a way to provide a loosely-defined space.

Perhaps it's a second word-vector space that allows context defined associations? Maybe it just needs tighter association of piano_keyboard with 8-step_repetition??

