
I don't find it interesting in an artistic way, but I do find it very interesting from an "AI experiment" angle.

I don't get what the "AI experiment" angle here is? The fact that AI can write python code that makes sounds? And if the end product isn't interesting or artistically worthwhile, what is the point?

I have a deep background in music, and I think that while the creation was super basic, the way the output was so unconstrained (written by a model fine-tuned for coding) is really interesting. Listen to that last one and tell me it couldn't belong on some TV show. I've always had issues with any AI-generated music because of the constraints and the way the output is so derivative. This was different to me.

What's the point if human-made art isn't interesting or artistically worthwhile?

(Most of it isn't.)

Art is on a sliding scale from "Fun study and experiment for the sake of it" to "Expresses something personal" to "Expresses something collective" to "A cultural landmark that invents a completely new expressive language, emotionally and technically."

All of those options are creatively worthwhile. Or maybe none of them are.

Take your pick.


> What's the point if human-made art isn't interesting or artistically worthwhile?

Because it is a human making it, expressing something is always worthwhile to the individual on a personal level. Even if it's not "artistically worthwhile", the process is rewarding to the participant at the very least. Which is why a lot of people find enjoyment in creating art even if it's not commercially successful.

But in this case, the criteria change for the final product (the music being produced). It is not artistically worthwhile to anyone, not even the creator.

So no, a person with no talent (self-claimed) using an LLM to create art is always, by default, much less worthwhile than a human being with no/any talent creating art on their own.


>Even if it's not "artistically worthwhile", the process is rewarding to the participant at the very least

I think that's the point though. What OP did was rewarding to themselves, and I found it more enjoyable than a lot of music I've heard that was made by humans. So don't be a gatekeeper on enjoyment.


How am I a gatekeeper? I provided my own opinions; you are free to enjoy what you want or disagree with me. If you want to get into an objective discussion of why you find it more enjoyable than human works, or of what art is, we can do that, but I do not like the personal slights.

I think you're mistaking the .wav as the final product, whereas instead it's really the .html blog post and this discussion.

I was discussing it with the commenter on the basis of the music and the actual product. Sure, if you want to go all Andy Kaufman, then yes, the .html and this discussion are art, but I wasn't talking about it that way in the original context of the conversation.

At least it wrote a song, instead of stably-diffusing static into entire tracks from its training data. I can take those uninteresting notes, plug them into a DAW and build something worthwhile. I can only do this with Suno-generated stems after much faffing about with transposing and fixing rhythms, because Suno doesn't know how to write music, it just creates waveforms.

AI tools are decent at helping with code because they're editing language in a context. AI tools are terrible at helping with art because they are operating on the entirely wrong abstraction layer (in this case, waveforms) instead of the languages humans use to create art, and it's just supremely difficult to add to the context without destroying it.


I just want to know what's in there. It doesn't need to be artistic at all. They put terabytes of data into the training process and I want to know what came through.

Very interesting experiment! I tried something related half a year ago (LLMs writing midi files, musical notation or guitar tabs), but directly creating audio with Python and sine waves is a pretty original approach.
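
For anyone curious what "directly creating audio with Python and sine waves" can look like in practice, here is a minimal sketch (my own illustration, not the author's actual script; the note frequencies and durations are made up):

    # Render a few sine-wave notes as 16-bit samples and write them to a WAV file.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def tone(freq_hz, seconds, volume=0.4):
        """One sine-wave note as a list of 16-bit sample values."""
        n = int(SAMPLE_RATE * seconds)
        return [int(volume * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
                for i in range(n)]

    # A short A-minor-ish arpeggio (A4, C5, E5), chosen purely for illustration.
    melody = tone(440.0, 0.5) + tone(523.25, 0.5) + tone(659.25, 1.0)

    with wave.open("melody.wav", "wb") as f:
        f.setnchannels(1)           # mono
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<" + "h" * len(melody), *melody))

Everything beyond this (rhythm, envelopes, harmony) is the kind of structure the model would have to come up with on its own.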

Even with 1 TB of weights (the probable size of the largest state-of-the-art models), the network is far too small to contain any significant part of the internet as compressed data, unless you really stretch the definition of data compression.

This sounds very wrong to me.

Take the C4 training dataset for example. The uncompressed, uncleaned size of the dataset is ~6 TB, and it contains an exhaustive English-language scrape of the public internet from 2019. The cleaned (still uncompressed) dataset is significantly less than 1 TB.

I could go on, but I think it's already pretty obvious that 1 TB is more than enough storage to represent a significant portion of the internet.


This would imply that the English internet is not much bigger than 20x the English Wikipedia.

That seems implausible.


> That seems implausible.

Why, exactly?

Refuting facts with "I doubt it, bro" isn't exactly a productive contribution to the conversation..


Because we can count? How could you possibly think that Wikipedia was 5% of the whole Internet? It's just such a bizarrely foolish idea.

A lot of the internet is duplicate data, low quality content, SEO spam etc. I wouldn't be surprised if 1 TB is a significant portion of the high-quality, information-dense part of the internet.

I would be extremely surprised if it was that small.

I was curious about the scale of 1TiB of text. According to WolframAlpha, it's roughly 1.1 trillion characters, which breaks down to 180.2 billion words, 360.5 million pages, or 16.2 billion lines. In terms of professional typing speed, that's about 3800 years of continuous work.
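
A rough back-of-envelope version of that calculation (the bytes-per-character, per-word, per-page and typing-speed figures below are my own assumptions, chosen to roughly reproduce the WolframAlpha numbers):

    # Back-of-envelope check of the 1 TiB-of-text figures above.
    TIB = 2 ** 40                 # bytes in one tebibyte

    chars = TIB                   # assume ~1 byte per character of plain text
    words = chars / 6             # assume ~6 characters per word   -> ~183 billion
    pages = chars / 3000          # assume ~3000 characters per page -> ~366 million

    # Typing nonstop at a brisk 90 words per minute:
    years = words / 90 / 60 / 24 / 365
    print(f"{words:.3g} words, {pages:.3g} pages, ~{years:.0f} years of typing")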

So post-deduplication, I think it's a fair assessment that a significant portion of high-quality text could fit within 1TiB. Tho 'high-quality' is a pretty squishy and subjective term.


Well, a terabyte of text is... quite a lot of text.

This is obviously wrong. There is a bunch of knowledge embedded in those weights, and some of it can be recalled verbatim. So, by virtue of this recall alone, training is a form of lossy data compression.

You can use an AI code editor that allows you to use your own API key, so you pay per token rather than a fixed monthly fee. For example, Cline or Roo Code.


They all let you do that now, including Claude Code itself. You can choose between pay per token and subscription.

Which means that a sensible way to go about those things is to start with a $20 subscription to get access to the best models, and then look at your extra per-token expenses and whether they justify that $200 monthly.
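
As a sketch of that comparison (the token counts and per-million-token prices here are hypothetical placeholders, not any provider's actual rates):

    # Hypothetical comparison: per-token API usage vs. a flat $200/month subscription.
    def monthly_api_cost(input_tokens, output_tokens,
                         usd_per_m_input, usd_per_m_output):
        return (input_tokens / 1e6) * usd_per_m_input \
             + (output_tokens / 1e6) * usd_per_m_output

    # Example month: 50M input + 10M output tokens at made-up rates.
    api_cost = monthly_api_cost(50_000_000, 10_000_000, 3.0, 15.0)
    subscription = 200.0
    print(f"API: ~${api_cost:.0f}/month vs. subscription: ${subscription:.0f}/month")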


> a lot of people seem to see LLMs as smarter than themselves

Well, in many cases they might be right..


As far as I can tell from poking people on HN about what "AGI" means, there might be a general belief that the median human is not intelligent. Given that the current batch of models apparently isn't AGI I'm struggling to see a clean test of what AGI might be that a human can pass.


LLMs may appear to do well on certain programming tasks on which they are trained intensively, but they are incredibly weak. If you try to use an LLM to generate, for example, a story, you will find that it will make unimaginable mistakes. If you ask an LLM to analyze a conversation from the internet it will misrepresent the positions of the participants, often restating things so that they mean something different or making mistakes about who said what in a way that humans never do. The longer the exchange the more these problems are exacerbated.

We are incredibly far from AGI.


We do have AI systems that write stories [0]. They work. The quality might not be spectacular but if you've ever gone out and spent time reading fanfiction you'd have to agree there are a lot of rather terrible human writers too (bless them). It still hits this issue that if we want LLMs to compete with the best of humanity then they aren't there yet, but that means defining human intelligence as something that most people don't have access to.

> If you ask an LLM to analyze a conversation from the internet it will misrepresent the positions of the participants, often restating things so that they mean something different or making mistakes about who said what in a way that humans never do.

AI transcription & summary seems to be a strong point of the models so I don't know what exactly you're trying to get to with this one. If you have evidence for that I'd actually be quite interested because humans are so bad at representing what other people said on the internet it seems like it should be an easy win for an AI. Humans typically have some wild interpretations of what other people write that cannot be supported from what was written.

[0] https://github.com/google-deepmind/dramatron


I haven't tried Dramatron, but my experience is that it isn't possible to do sensibly. With regard to the second part:

>AI transcription & summary seems to be a strong point of the models so I don't know what exactly you're trying to get to with this one. If you have evidence for that I'd actually be quite interested because humans are so bad at representing what other people said on the internet it seems like it should be an easy win for an AI. Humans typically have some wild interpretations of what other people write that cannot be supported from what was written.

Transcription and summarization are indeed fine, but try posting a longer Reddit or HN discussion you've been part of into any model of your choice and ask it to analyze it, and you will see severe errors very soon. It will consistently misrepresent the views expressed, and it doesn't really matter what model you go for. They can't do it.


I can see why they'd struggle; I'm not sure what you're trying to ask the model to do. What type of analysis are you expecting? If the model is supposed to represent the views expressed, that would be a summary. If you aren't asking it for a summary, what do you want it to do? Do you literally mean you want the model to perform conversational analysis (i.e., https://en.wikipedia.org/wiki/Conversation_analysis#Method)?


Usually I use the format "Analyze the following ...".

For simple discussions this is fine. For complex discussions, especially when people get into conflict (whether that conflict is really complex or not), problems usually result. The big problems are that the model will misquote or misrepresent views: attempted paraphrases that actually change the meaning, the ordinary hallucinations, etc.

For stories the confusion is much greater. Much of it is due to the basic way LLMs work: stories have dialogue, so if the premise involves people not being able to speak each other's language, problems come very soon. I remember asking some recent Microsoft Copilot variant to write a portal scenario (some guys on vacation in Tenerife rent a catamaran and end up falling through a hole into the world of ASoIAF, into the seas off Essos, where they obviously have a terrible time), and it kept forgetting that they don't know English.

This is of course not obviously relevant to what Copilot is intended for, but I feel that if you actually try this you will understand how far we are from something like AGI, because if things like OpenAI's or whoever's systems were in fact close, this would be close too. If we were close we'd probably still see silly errors, but they'd be different kinds of errors: things like not telling you the story you want, not ignoring core instructions or failing to understand conversations.


Your points about misquotes and language troubles are very valid and interesting. But a word of caution on your prompt: you're asking a lot of the word "analyze" here; if the LLM responded that the thread had 15 comments by 10 unique authors and a total of 2000 characters, I would classify that as a completely satisfactory answer (assuming the figures were correct) based on the query.


> Usually I use the format "Analyze the following ...".

It doesn't surprise me that you're getting nonsense; that is an ill-formed request. The AI can't fulfil it because it isn't asking it to do anything. I'm in the same boat as an AI would be: I can't tell what outcome you want. I'd probably interpret it as "summarise this conversation" if someone asked that of me, but you seem to agree that AI are good at summary tasks, so that doesn't seem like it would be what you want. If I had my troll hat on I'd give you a frequency analysis of the letters and call it a day, which is more passive-aggressive than I'd expect of the AI; they tend to just blather when they get a vague setup. They aren't psychic; it is necessary to give them instructions to carry out.


> We are incredibly far from AGI.

This, and we don't actually know what the foundation models for AGI are; we're just assuming LLMs are it.


This seems distant from my experience. Modern LLMs are superb at summarisation, far better than most people.


> there might be a general belief that the median human is not intelligent

This is to deconstruct the question.

I don't think it's even wrong - a lot of people are doing things, making decisions, living life perfectly normally, successfully even, without applying intelligence in a personal way. Those with socially accredited 'intelligence' would be the worst offenders imo - they do not apply their intelligence personally but simply massage themselves and others towards consensus. Which is ultimately materially beneficial to them - so why not?

For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else. Computers can only do an imitation of this sort of answer. People stand a chance of answering it.


>knowing why you are doing what you are doing[...] Computers can only do an imitation of this sort of answer. People stand a chance of answering it.

I'm not following. A computer's "why" is a written program, surely that is the most clear expression of its intent you could ask for?


A computer doesn't determine the why, it is programmed to do so. It doesn't determine meaning or value from whatever-it-is.


Did you mean it doesn't set its own goals? Or what did you mean by "determine the why", if not a stack trace of its motivations (which is to say, its programming)? Could you give an example of determining meaning or value?


Yes, set its own goals. Here's an example - say you wanted to track your spending, you might create a spreadsheet to do so. The spreadsheet won't write itself. If you want, you could perhaps task an AI to monitor and track spending - but it doesn't care. It is the human that cares/feels/values whatever-it-is. Computers are not that type.

Is your position that humans are pretty mechanistic, and simply playing out their programming, like computers? And that they can provide a stacktrace for what they do?

If so, this is what I was getting at with my initial comment. Most people do not apply their intelligence personally - they are simply playing out the goals that were inserted into them (by parents, society). There are alternative possibilities, but it seems that most people's operational procedures and actions are not something they have considered or actively sought.


>Is your position that humans are [...] simply playing out their programming?

Yes, at least it's what I wanted to drill further into.

Boiled down, I'm interested in hearing where "intelligent" people derive their motivations (I'm in agreement that most people are on ["non-intelligent", if you will] auto-pilot most of the time) if not from outside themselves, in your framework.

When does a goal start being my intelligent own goal? Any impetus for something can be traced back to not-yourself: I might decide to start tracking my spending, but that decision doesn't form out of the void. Maybe I value frugality, but I did not create that value in myself. It was instilled in me by experience, or my peers, etc. I see no way for one to "spontaneously" form a motivation, or if I wanted to take it one step further(into the Buddha's territory), I would have to question who, and where, and what this "self" even is.


Here's a question for you. Imagine a child who was well looked after (fed and loved) but didn't go to school for 12+ years. Now imagine the same person who from the age of 5-6 followed the usual path of 12+ years of schooling. Which person do you imagine would be more fully themselves, the more complete expression of whatever was already inside? If the schooled person did a PhD too (so another 6 years) would that help or hinder them from becoming themselves?

To me, the answer is obvious. Inserting thousands of ideas and patterns of thoughts into a person will be unlikely to help them become a true expression of their nature. If you know gardening, the schooled person is more like a trained tree - grown in a way that suits the farmer - the more tied back the tree is, the less free it is.

As I see it, each individual is unique, with a soul. Each is capable of reaching a full expression of itself, by itself. What I also see is that there are many systems that are intentional manipulations, put in place in order to farm individuals at the individual's expense. The more education one receives, the more amenable one is to being 'farmed' according to the terms that were inserted. To me, this is the installation of an unnatural and servile mentality, which once adopted makes the person easy to harness - the person will even think being harnessed and 'in service' is right and good.

The problem is that these principles were not their own. These are like religious beliefs, and unlike principles founded according to personal experience. Received principles will always be unnatural. Acting according to them, is to act in an inauthentic way. However, there is no material reason to address the inauthenticity, as when one looks around, everyone else is doing the same. This results in a self-supporting, collective delusion.

In my view there are answers to what the self is - but 'society' cannot teach you them - it can only fill you with delusions. Imo, you would be on a better footing to forget everything you think you know (this costs you nothing) and do something like apply the scientific method personally - let your personal experiences guide you. Know the difference between 'knowing' because of experience and 'belief' because you were taught it. Even more simply, know thyself.


My position is that we are nothing but our circumstances (I'm assuming that we're in agreement that genetics, pre-birth nutrition, etc., are part of these circumstances and not of the 'soul' you're after?), or to put it more directly: We are our circumstances. Our Soul Is That. There is nothing that is "already inside".

The tree does not exist in isolation, separate from the patterns of rain and sunshine that shape its growth. "The separation is an illusion".

I have indeed been on the same path as you of trying to shed delusions and applying the scientific method, and have up to this point found no indication of any "causeless cause" to steer me besides the fundamental is-ness of the universe.

Put bluntly, I believe that if you hadn't started with the assumption of a soul, you would be entirely unable to arrive at the conclusion of a soul by rational methods. And starting by assuming the unproven instead of emptiness is epistemological cheating.


> There is nothing that is "already inside".

Have you seen babies, or puppies? You would easily be able to confirm for yourself that creatures are born with distinct personalities. It's not just chemistry or nurture.

> "The separation is an illusion"

But you don't really think this. You don't really think you are a tree. You do think you are distinct.


>You would easily be able to confirm for yourself that creatures are born with distinct personalities

Refer to my previous post: "I'm assuming that we're in agreement that genetics, pre-birth nutrition etc, are part of these circumstances and not of the 'soul' you're after?"

That's not some mysterious transcendent soul, that's genetics. Literally the exact same thing as a computer program. Dog breeds are specifically bred (programmed) to exhibit certain character traits, for example.

>You don't really think you are a tree. You do think you are distinct.

You missed the point of the argument. Just as the tree is not separate from its circumstances, neither am I.

You brought up "know thyself" so I assumed we were pulling from a similar corpus and brought up "the illusion of separation" as a mutually familiar point that didn't need much elaboration, sorry about that.

Also, it's not so much that I "think" I am distinct, more that I "believe" it, to put it in the terms you used earlier. I am conditioned to consider certain things "me" and others not.

Really I am no more distinct from the tree than, say, my fingernail is distinct from my nosebone. They belong to the same Individual.


> Dog breeds are specifically bred(programmed) to exhibit certain character traits, for example.

And yet all dogs have their own unique characters, no? They are not the same individual, right?

> You brought up "know thyself" so I assumed we were pulling from a similar corpus and brought up "the illusion of separation" as a mutually familiar point that didn't need much elaboration, sorry about that.

I don't know what corpus you refer to. Please explain if you like. I'm not basing what I'm saying on a corpus - of course I've read books, but I am giving you my personal view on things.

> Also, it's not so much that I "think" I am distinct, more that I "believe" it, to put it in the terms you used earlier. I am conditioned to consider certain things "me" and others not.

I have heard this sort of (nondual) thinking before and completely dispute it. I personally cannot access anyone else's mind or body; I have no idea what you are thinking. I can only pretend to be doing this. There is a self, we live it continuously. There are times when we are fully present, when we are so in the immediate experience that we can move out of linguistic/common concepts perhaps, but this is still within oneself.

For me, it is more that each person is a world in their own right, rather than "us" all being in the same universe. We simply do not have the level of interconnectivity you believe is there, when you say you are the tree or me. Furthermore, it really is very hard to see the point you are making when we have a disagreement - plainly there is a distinction.


You're either outright refusing or unable to see the point I've been making about the breeds: The traits are physically programmed in, whether individual or familial, not "already inside" the individual's soul. You aren't tracking that part of our conversation properly.

On the "corpus" point: It's not about not "giving my personal view", it's about drawing from a shared lexicon, of terminology, of lenses through which to view and analyze That Which We Are Talking About. My "home" in this respect is mostly in Hindu Yoga, (Zen) Buddhism and Daoism. You will find in those corpus-es(corpi?) essentially the exact conversation we're having right now, and find addressed the questions you have, in a wonderful plethora of different ways. Any other religion's mystic branch, or western occultism or alchemy similarly. If you want a specific recommendation for an entry-point, I could recommend giving the Bhagavad Gita a shot and seeing if you "vibe" with the way it explains things. If you skip the (usually) included commentary and only read the core translation, it ought to be a fairly quick read.

The nonduality point: Your body cannot access others' experience any more than my fingernail can access that of my nosebone, sure. But again, that does not mean they aren't part of the same organism. The fingernail and the nosebone do not make independent choices; the choice is made for them by the meta-organism (my body). Similarly, the argumentation might go, the tree and I do not make independent choices, but are governed by the same meta-organism (Nature, if you will, or perhaps "The Universe", but I suspect that term will turn you off since it might evoke the image of new-age-hippy woowoo).

I'm saying that if you insist that the body/mind/whatever you currently refer to as "you" is your "Self", you are taking "the fingernail" to be your Self instead of "the whole person". "Plainly there is a distinction", yes. But at the same time, there is also an underlying interconnectivity.

>Furthermore, it really is very hard to see the point you are making when we have a disagreement

That is perhaps the wisest thing either of us is going to say in this conversation. This format does not serve high-effort posting very well, I know I'm not doing the best I could be.

Perhaps we should shelve this discussion for now? If you care to continue more deeply, you could shoot me an email at any point in the future (see my profile), and I again heartily recommend the Bhagavad Gita. Or perhaps, if you're more rational-thinking oriented, you might enjoy (the even shorter) Yoga Sutras of Patanjali. Or have you checked out Yudkowsky's "Sequences"[1]? That one's completely down-to-earth, no spiritual terminology or metaphors (or non-dualism, I'm pretty sure!), and covers a lot of the same ground my eastern background does.

[1]https://www.readthesequences.com/


> You're either outright refusing or unable to see the point I've been making about the breeds: The traits are physically programmed in, whether individual or familial, not "already inside" the individual's soul. You aren't tracking that part of our conversation properly.

I don't dispute traits. But the traits idea fails to address the unique characteristics of each dog.

It seems I'm not tracking the things you want me to track, terminology, science, traits. But then, as I said in the first place:

> For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else.

I can tell you are sincere with your investigations, but I can't help wondering whether direct observation of reality, the development of a personal outlook on reality that uses personal experience as the primary source, is ultimately more valuable than familiarity with a corpus. But then I would say that. And you would disagree.


Again, you are not getting what I was saying about the corpus. I am pulling from a vocabulary to express my personal outlook from personal experience, from direct observation. It's not either/or. You are the one completely rejecting half of all power-of-truth-finding available to you, and calling it intelligent? I'm explaining mathematics to you and you're complaining that I'm leaning on centuries of established proofs instead of, what, inventing a new lexicon just for talking to you?

I am giving up. You are engaging with the points in your head instead of those on the page.

You match the spirit that you comprehend, not me.


Being an intelligent being is not the same as being considered intelligent relative to the rest of your species. I think we're just looking to create an intelligence, meaning something having the attributes that make a being intelligent, which mostly are the ability to reason and learn. I think the being might take over from there, no?

With humans, the speed and ease with which we learn and reason is capped. I think a very dumb intelligence will stay dumb for not very long, because every resource will be spent on making it smarter.


Why would the dumb intelligence be less constrained than a human in making itself smarter?


I have yet to see an LLM with hands, feet, or eyeballs.

Currently, LLMs require hooks and active engagement with humans to ‘do’ anything. Including learn.


> every resource will be spent in making it smarter

The root motivation on which every resource will be spent is simply and very obviously to make a profit.


So tired of this argument.


> ChatGPT (o3): Scored 136 on the Mensa Norway test in April 2025

So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.


Does an LLM scoring well on the Mensa test translate to it doing excellent and factual police reporting? It is probably not true of humans doing well on the Mensa test, so why would it be true of an LLM?

We should probably rigorously verify that, for a role that itself is about rigorous verification without reasonable doubt.

I can immediately, and reasonably, doubt the output of an LLM, pending verification.


> the metric of how [the uninformed] generally measure intelligence


How do the informed measure intelligence?

I know I'm too late to ask this question, but I suspect it's either feelings and intuitions, which are just a primitive IQ test, or some kind of aptitude test, which is just a different flavor of IQ test.


Court reports should be as much about human sensibility. I have met plenty of high-IQ people who were insensitive.


Having listened to some of the new AI-generated songs on YouTube, it looks like they might be better at being sensitive humans than we are as well.


Where do you imagine they copied those human sensitivities from? The weather?


The same place as humans do, other humans.


Yeah, I certainly associate LLMs with high intelligence when they provide fake links to fake information. I think, man, this thing is SMART.


Isn't it obvious? Near-future vision-language-action models have obvious military potential (see what the Figure company is doing, now imagine it in a combat robot variant). Any superpower that fails to develop combat robots with such AI will not be a superpower for very long. China will develop them soon. If the US does not, the US is a dead superpower walking. The EU is unfortunately still sleeping. Well, perhaps France with Mistral has a chance.


Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.

The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.


I agree that people won't accept degrowth.

This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.

And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.

Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.


If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.


But why do you think that? Energy use is a matter of availability, not purely of technological advancement. For sure, technological advancement can unlock better ways to produce it, but if people in the 50s somehow had an infinite source of free energy at their disposal, we would have boiled off the oceans before we got the Internet.

So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.

And that's the crux of the issue isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...


Meat is not necessary.


The only way to phase out meat is to make a replacement that actually tastes good.

Come to the American South and ask them to try tempeh. They'll look at you like you asked them to eat roaches.

It's a cultural thing.


Taste has nothing to do with it; this is all based on economics, and the actual way to stop meat consumption is to simply remove big-ag tax subsidies and other externalized costs of production which are not actually realized by the consumer. A burger would cost more than most could afford, and the free market would take care of this problem without additional intervention. Unfortunately, we do not have a free market.


I would much rather lobby for ending ag-gag laws and fighting for better treatment of animals.

I think it's more realistic than getting people to give up meat entirely.


You cannot treat a commodified individual "better" - it is only possible to euphemize such a logical fallacy.


So there's no point in pushing for pasture-raised, and it's either all or nothing?

I think incremental progress is possible. I think rolling back ag-gag laws would make a positive difference in animal welfare, because people would be able to film and show how bad the conditions inside are.

I think that's worth pushing for. And it's more realistic than everyone stopping eating meat all at once.


The economics of what you describe are impossible. The entire concept of an idyllic pasture is actual industry propaganda which is not based in objective reality.


I think getting everyone around me to stop eating meat is not based in objective reality.

If we had better animal welfare laws and meat became prohibitively expensive, I would be absolutely fine with that.

I think incremental progress is possible. We shouldn't let perfect be the enemy of good.


People will eventually stop eating meat because it is unsustainable, but unfortunately not without causing a great deal of suffering first, and your comment is an example of why this process is unnecessarily prolonged. It is clear you have not done much research on actual animal welfare based on your "pasture" argument alone. I am even willing to bet you think humans currently outnumber animals, when the reality is so much more troubling.


>I am even willing to bet you think humans currently outnumber animals

I'm not sure what makes you assume that about me. I'm well aware that there are more animals than humans?

It's clear that this is no longer a productive discussion about animal welfare.

----------------------------

"Be kind. Don't be snarky. Converse curiously; don't cross-examine."

"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


> I'm not sure what makes you assume that about me.

I'm not sure why you're not sure; the parent comment explained it already: your vision of an idealized pasture is incongruent with reality, namely because the number of animals and resources it would take to materialize and actually sustain such a system defies reason.

This was never a discussion about animal welfare, but about challenging industry-seeded assumptions which were not even being questioned. It is unfortunate this makes you feel threatened and requires a retreat from the conversation, but it is also typical.


Comfortable clothes aren't necessary. Food with flavor isn't necessary... We should all just eat ground up crickets in beige cubicles because of how many unnecessary things we could get rid of. /s


Ridiculous overreaction.


This is a subjective value judgement and many disagree.

