Hacker News
Human brain organoid bioprocessors now available to rent for $500 per month (tomshardware.com)
78 points by RyeCombinator on Aug 28, 2024 | 142 comments


Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus


I am one of the co-founders of this company. Many of the questions asked here are answered in our recent publication: https://www.frontiersin.org/journals/artificial-intelligence...

I will be happy to try to answer any other questions!


How confident are you that these brain organoids are incapable of qualia and thus suffering?


Username relevant.

PS. While congratulating the company, I also wanted to second the question ...


We have no information on this topic. Consider that at this time we have a number of neurons equivalent to a fly larva.


Flies have demonstrated signs of consciousness so if that's supposed to be reassuring, it's not.


Compared to chips, where would flies be? 1-bit microcontroller level? With our brain at the top of the line, of course (latest-generation AMD or Intel).


What does a "Hello World" for something like this look like? Is it an API? Do I push a Docker image to it or something? How do you interact with this sort of system?


There is a Python interface. The API lets you send stimulations, measure action potentials, and deliver dopamine in real time. There are two videos that explain this in more detail at: https://finalspark.com/neuroplatform/
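As a rough illustration, a "Hello World" against such a Python interface might look something like the sketch below. The client and method names (`NeuroClient`, `stimulate`, `read_spikes`, `reward`) are all invented for illustration; the actual API will differ, and a small stub class stands in for the remote organoid so the sketch actually runs.

```python
# Hypothetical sketch of an organoid "Hello World". Everything here
# (class name, methods, units) is invented for illustration only.

class NeuroClient:
    """Minimal stand-in for a remote organoid session."""

    def __init__(self, electrode_count=8):
        # One recorded amplitude per electrode, for demonstration.
        self.electrodes = [0.0] * electrode_count

    def stimulate(self, electrode, amplitude_uv):
        # A real client would send an electrical pulse to the organoid;
        # here we just record the request so the sketch is runnable.
        self.electrodes[electrode] = amplitude_uv
        return True

    def read_spikes(self, electrode):
        # A real call would return measured action potentials.
        return [self.electrodes[electrode]]

    def reward(self):
        # The platform reportedly supports dopamine delivery as a
        # reward signal; modeled here as a stub.
        return "dopamine delivered"


session = NeuroClient()
session.stimulate(electrode=0, amplitude_uv=150.0)
spikes = session.read_spikes(0)
print(spikes)  # the stub echoes back the recorded stimulation
```

The stimulate/read/reward loop is the part the videos describe; everything else is scaffolding so the snippet is self-contained.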


ahh I didn't see the videos, thank you.


What does it feel like to be a real-life mad scientist?


Is the total mass of the organoids in your system greater than the mass of a single human brain?


Back-of-the-envelope calculation: one brain's worth of neurons would make 10 million instances available for hire, and at 500 USD per instance that would give a monthly recurring revenue (MRR) of 5 billion USD.

Not too bad for a startup!
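For what it's worth, the envelope math above checks out against the neuron counts given elsewhere in the thread (~100 billion per brain, ~10,000 per organoid):

```python
# Sanity check of the back-of-the-envelope revenue figures.
neurons_per_brain = 100_000_000_000   # ~100 billion (per the co-founder)
neurons_per_organoid = 10_000         # ~10,000 (per the co-founder)
price_per_instance_usd = 500          # monthly rental price

instances = neurons_per_brain // neurons_per_organoid
mrr_usd = instances * price_per_instance_usd

print(instances)  # 10,000,000 rentable instances
print(mrr_usd)    # 5,000,000,000 USD per month
```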


Honestly I see "rent your brain compute for money" as a real thing in the future.


I live that life now. It's a bit demeaning at times, but it beats physical hard labour.


No, it is far less. Consider that we have about 10,000 neurons per organoid, while our brain has about 100 billion.


Ethical objections in this thread are a red herring.

Our brains are not that different from other primates' brains. We do abhorrent things to other primates that __demonstrably__ have experiential qualia similar to humans', mostly for the benefit of our species. But somehow objectors are losing their marbles over a human cell cluster that has no more than a few thousand constituent cells.

The computational ability of human brains is an unintentional happenstance of billions of years of random-walk evolution; there's no reason to believe that we can't make systems that are even more efficient with intentional design. If these organoids get us there, I'm all for them.


The fact that we do more horrifying things to other species is not a justification.

We shouldn't be doing those things. The fact our legal structures haven't banned them yet does not invalidate the ethical concerns about using human brain tissue in this manner.

It's not just a cell cluster, it's a cluster of brain cells, the organ that is most likely linked to consciousness. We don't understand how that organ is linked to consciousness, so we don't know at what point and structure a cluster of neurons would develop it. Given that, this is an extremely dangerous path to start walking down.


This is going to come across as inflammatory, but I promise I’m not intending it to be:

Can someone give me the arguments for and against why this is different from the ethics of abortion?

It occurs to me that for anyone concerned about the possibility of consciousness, and therefore suffering, in bio-organic computing, isn't the consistent position to also be against abortion after 2 weeks, which is when the brain begins its development in the fetus?


Death and torture are different things.


Murder and torture, you mean.

If the consciousness possesses a capacity for suffering, torture as you say, the taking of its life is murder.

Both are immoral…


Euthanasia is not murder. There are many situations in which death is the desired choice.

In the case of abortion, you have one being (the fetus) that isn't yet fully formed and capable of living on its own and another being (the mother) who is potentially harmed by the first being's life. Yes, it is a morally complex situation.

We don't know when the first being becomes conscious. But we know the second being is conscious. And we know the consequence of treating the fetus as a being with full rights is to deprive the mother of her rights, frequently her right to life.

Frankly, I have first hand experience with this I strongly doubt many abortion opponents have. If not for abortion, my wife would probably be dead after our wanted pregnancy turned out to be non-viable. We aborted our son while he was still alive, because if we hadn't she could have gotten sepsis as he slowly died and rotted in her belly.

People who treat abortion as a black and white moral issue the way you appear to be doing have now banned it where I live, which means my wife and I cannot risk having more children, given our history of loss. (Of which the one I described above is only a small piece.)

Abortion is morally grey. There are few other ethics cases where there is such a direct conflict between the rights of an existing being and the rights of a potential being. The only reasonable way to handle that is to allow each individual to make their own moral judgement and choice.


Death is only a relevant concept to the living. Someone who understands nothing about their existence, who has no desires, no memories, can't possibly understand death.

If I cut my arm off, I'm "murdering" trillions of organisms. Is that immoral?

I think suffering is only possible for much more complex organisms. You can't mourn if you can't remember, for example.


Dangerous for what?


Dangerous for the potential consequences we can't predict, and for the violation of ethical considerations for a sentient organism.

We don't fully understand consciousness or how it works, and by experimenting with these, we could inadvertently create something capable of suffering or even self-awareness. Without a clear understanding of these processes, it's downright reckless and unethical to assume there's no risk. Dbingham rightly points out that we're treading into unknown and potentially unethical territory.


We couldn't predict the consequences of most things, from the wheel to electricity to the internet. People predicted mailmen on flying bicycles delivering mail to cloud houses. The best predictions were rare and still shallow. Things still worked out, more or less. I don't get this part at all.

I believe the most important thing for the ethics-concerned commenters here is ethics. But I have a hard time reasoning about it while living on a planet with roughly 90% of humans living under the poverty line, 9.9% living as modern-era slaves, and 100% of organisms that ever lived suffering in uncountable ways. To me it's more dangerous to leave this as is than to experiment on yet another 0.001% of biotissue.

Even when I try hard to take an ethical position, it seems moot to me. I still can't be sure that my spinal cord isn't suffering all these years without me knowing. I mean, yes, less suffering would be great, not even as a matter of ethics, just from empathy. But neural tissue that has no mouth and probably must scream is too effing everywhere to single some cell collections out. Nature doesn't care. Scream if you can, and if you can't, too bad. I just can't find this balanced view that you guys have; it feels like patting yourself on the back to me.


Hacking someone's brain to mine cryptocoins in the background may not ever be possible, but if it is, this is how it all starts.


>Our brains are not that different than other primates' brains. We do abhorrent things to other primates that __demonstrably__ have similar experiential qualia as humans, mostly for the benefit of our species. But somehow objectors are losing their marbles over a human cell cluster that has no more than a few thousand constituent cells.

This is some kind of strawman. I'm confident that people who are against human cell cluster torture are also against primate torture. Or at least make the mental calculus required to justify it.


There isn’t enough time for a random walk evolution to result in the structure we see. It’s more complicated than that.

It seems the universe in the form of humans was meant to contemplate itself.


Did not expect an earnest argument for a cosmological teleology on HN today, thank you.

I would just say it's a whole lot of trouble to get humans to do something they aren't even doing!


Obviously, no human has any experience, nor any expertise, on whether there was enough time.

So the argument pretty much turns into "my feelings" vs. Occam's Razor.


Amusingly, I suspect that supporters of both the teleological position and the non-teleological position think Occam supports them.

For what it’s worth, the only empirical macroevolution we’ve done, from domestication, to Monte Carlo simulations, to genetic algorithms, to LLM training, is all goal directed.


> Amusingly, I suspect...

Yes...though I'd say that the teleological folks have emotional reasons to judge an ever-so-specific deity (who they already believe in) to be a conveniently minimal hypothesis. Which is a good example of a usually-necessary heuristic - pushing complexity, costs, unknowns, and other unhappy things far enough away (in real or metaphorical space/time) that you don't have to worry about them now.

> FWIW, the only empirical...

All the budget requests for sextillions of organisms, simulated for billions of years, were rejected. So, unfortunately, some large corners had to be cut...


Out of curiosity do you have any kind of argument or supporting evidence for your position or just straw men and snark?


If you dramatically redefine random to include choices between useful outcomes (i.e. the non-random boundary conditions of the decision tree), then Occam's razor works well here.


I would gladly hear more of your take on this.


Would require beer near Minneapolis! And a jar of organoid bioprocessors.


Can do! What is your research field?


Beer we could probably work out ...

As for the "snacks" ... :)


The best part of the article for me is right at the top.

> When you purchase through links on our site, we may earn an affiliate commission.

Tom's Hardware was one affiliate link away from bumping the cyberpunk index of this article to 11.


Tom's Hardware, soon to mean "Tom is Hardware".


Putting the ethical dilemmas aside, I'd like to know how such bioprocessors could possibly have:

>million times greater power efficiency when compared to digital processors

if bioprocessors have to support ("run" the metabolism of) all their organelles, including parts that are not at all involved in signal processing (which I suppose is >99% of the cell), compared to digital processors that we built with the sole purpose of performing such operations, whose logic gates are already close to a single layer of atoms in size? What did we miss in the design?


Far fewer electrons have to move for a detectable chemical reaction than for a detectable electric current. Solid-state systems are just much less efficient at computation, but they are much easier to organize.

Compare a copy operation on a DNA sequence in a cell with a copy in memory: the cell does it extremely cheaply, since it is a simple chemical reaction, while the memory has to push electrons through a giant network of nodes. You can easily copy exabytes of data using chemistry (DNA) with almost no energy.
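A rough order-of-magnitude sketch of that gap, using commonly cited ballpark figures (not from the article): DNA replication spends about 2 ATP-equivalents per copied nucleotide at roughly 5e-20 J each, while a DRAM write costs on the order of 1e-12 J per bit.

```python
# Rough energy-per-bit comparison: DNA copying vs. DRAM writes.
# All figures are approximate, commonly cited ballpark values.
atp_joules = 5e-20            # energy released per ATP hydrolysis
atp_per_base = 2              # ~2 ATP-equivalents per copied nucleotide
bits_per_base = 2             # each base encodes 2 bits (4 possible bases)

dna_joules_per_bit = (atp_per_base * atp_joules) / bits_per_base
dram_joules_per_bit = 1e-12   # order-of-magnitude DRAM write cost

ratio = dram_joules_per_bit / dna_joules_per_bit
print(f"DNA copying is roughly {ratio:.0e}x cheaper per bit than DRAM")
```

Even with generous error bars on every number, the gap is many orders of magnitude, which is the parent comment's point.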


It's interesting that each "processor" needs a camera, which is made up of inefficient transistors and other leaky electrical components, and there is still a claim of an efficiency surplus.


I'm not sure "The Matrix" was meant to be a How-To Manual... because this sure feels like the first step to humans in a tube.


The official Matrix canon is that humans were being used for energy, which doesn't really make sense until you interpret it as using humans as highly efficient information processors. This company does awfully seem like step 0 here.


The original story had the AI processing all done on the 90% of our brains we don't use (ignoring that particular bit of bs), but Warner gonna Warner.


I think there is a plausible argument, believe it or not. See my essay "Why are Humans used as Batteries (a power source) in the Matrix?" https://dwheeler.com/essays/humans-batteries-matrix.html

Abstract: "In the fictional world of The Matrix, I propose that the machines might use the humans as a power source (as “batteries”) not because the humans are a good power source, but because doing this allows the machines to avoid committing genocide - as would otherwise be required by their laws. This compromise could have been necessary to prevent a machine world civil war."


There is also the question of whether, after Operation Dark Storm, there was any other power source suitable to power all of the machines. Even if they did not have a law against genocide (which, considering the resets that happen in the Matrix, killing all but 15 or 16 people, or whatever the number is, may not exist), Operation Dark Storm was a last-ditch effort on humanity's side; was this also a last-ditch effort on the machines' side?

But at the same time, the theory that maybe they are against genocide in some form is backed up by Smith talking about how the original Matrix was going to be a utopia. That doesn't seem like the kind of thing you would build for the people you are fighting (who just tried to wipe you out completely) if it weren't for some ethical guidelines.

Fuel would be a limited resource. I know a lot of people point out that it doesn't make sense biologically, but we also see Neo being pretty frail when he wakes up. Does that biological problem still stand when your body is frail and it is basically just your brain and basic processes running? Throw in that you're not eating real food, but something likely optimized to lose as little energy as possible in digestion.

Something I don't think any of the media touched on: would it be crazy to think that the machines could have made genetic modifications to support this?


That's a fun idea I'm definitely going to bring up next time the battery thing is mentioned :)


My impression is that you can rent actual live humans, for far less than $500/month, if you're not picky about their circumstances or locations.


Every good bubble-bro knows that Don't Create The Torment Nexus was clearly an instruction manual. :)


So... where exactly do the neurons come from? Are these donated neurons or grown in a lab from samples?

This seems... questionable at best. Not really comfortable with the idea of this...


Maybe I'm too cautious, but from what I understand our research into consciousness isn't very thorough. Meaning, we do not know the causes of, or thresholds for, consciousness.

So, to me, it seems like until that threshold is very clearly established, any work with 'organoids' is hazardous at best, but more than likely just wildly inappropriate.

Am I being a luddite?


In my opinion, consciousness is a philosophical issue, not a scientific one. You have no way of truly knowing whether any other entity, including myself, has a consciousness. The interesting thing is that even if we imagine some ultra-advanced tech where you could somehow brain swap with somebody, and you still had a consciousness, you still wouldn't really know if they had a consciousness because you end up begging the question there. Did you simply maintain your own consciousness, or did it come from the new brain?


Yup.

Nobody reading this comment knows if there's a human by the keyboard.

Heck, if you go for A J Ayer and "there is a thought now", then even I don't really know if I'm the real me, or if I'm instead just a vision-and-text model seeing the image of the phone and a finger swiping over the keyboard, to be switched off when the "real me" is satisfied the output proves some point.

You may argue that I can perceive myself and therefore know that I exist (I certainly took that approach at school contrasting the previous Ayer quote with "cogito ergo sum"), but even that doesn't tell me that what I experience as consciousness is what you experience as consciousness.

Indeed, things in the same category as aphantasia, sleep deprivation, and excessive caffeine consumption, all suggest to me that we all have very different experiences of being conscious at the best of times and even without such thought experiments.


What do you think is more likely, every other human and probably most animals experience consciousness in much the same way that you do, or you are unique and special and your consciousness is much more lucid than everyone else?

I have always been so puzzled by the line of thinking you present here. It is clear that everyone else responds to pain in much the same way; everyone else responds to lack of food, water, social interaction, love, etc. in much the same way. The differences are at the fringes, and your examples of sleep deprivation and caffeine consumption only serve to reinforce that: it doesn't matter how much coffee I drink or how little sleep I get, it's going to hurt if my bones are broken, just like it would for you.

I think this line of thinking is a cute little thought experiment, but it falls apart quickly: if we operated as if we didn't know whether other people were conscious to the same degree we are, then we would behave abhorrently. And largely, we don't. So even if you intellectually posit that you are the only truly conscious person, or that maybe even you are not conscious, you do not actually behave this way in practice. Obviously.


I'd go further than this - consciousness only exists in social context. Or in a dynamic if you prefer. Children raised alone, or suffering extreme neglect (e.g.: infamous Genie case) suffer enormous cognitive impairments, including language acquisition and theory of mind. Prisoners kept in isolation frequently descend into psychosis and suffer measurable neuronal shrinkage. Even moderate social isolation is correlated with depression.

Consciousness emerges and is maintained in the context of other people - and not just individuals but communities whose language, techniques of adaptive living (i.e. traditions), and storytelling about the world are carried intra-individually within clan and affiliate groups. In the postmodern era those communities can be parasocial or virtual to an extent, fictive or reenacted, but they're not optional for the continued coherence of self.

There is no 'I' except in relation to the other, moreover the self doesn't become fully individuated without being witnessed and held by the other - caregivers in early childhood, friends in adolescence, social roles in adulthood.

All of this would be impossible if our phenomenology wasn't broadly similar. All societies have identified social roles for those few whose inner world isn't commensurate with the collective - in modernity we call them schizophrenics.


> What do you think is more likely, every other human and probably most animals experience consciousness in much the same way that you do, or you are unique and special and your consciousness is much more lucid than everyone else?

I’d separate “experiencing” from how it works here. It doesn’t have to be more lucid, all these people similar to you may not experience it and may only “work” properly, while you both work and experience. In this, you also may lose it anytime and the body you have wouldn’t even notice.

As of how unique you are, well, you may have it and they may not. You can’t tell due to “wouldn’t notice” part. You (both body and it) can only insanely loop back, converge on it for a while right now, until routine gets you again. Imagine a book about a guy who reads that same book. That’s the theme here. The author decides to never go back to this fourth-wall break and that’s it. Or he just stops writing in the middle of a sentence. Or something else. Whatever he does, the guy in the book has zero choice, despite realizing all of it clearly.

> it really falls apart quickly - if we operated like we didn't know if other people were conscious to the same degree we were, then we would behave abhorrently

Only if the “work” part tends to this and loops back enough. Which is unusual for humans. People barely ask themselves questions about consciousness, and many actively(!) avoid the talks. You may think that free bodyless geneless part of “you” is something that makes decisions, but does it?


I'm sorry, this entire comment does not make any sense to me. What do you mean by "work"? In your second paragraph you keep using "it" without identifying what "it" is. You also keep saying "loop back"; I don't understand what you mean by this. This sounds a little like LSD-style word salad.


He's referring to consciousness, and the fact that we may all be more of passengers than drivers. The entity inside of you, experiencing everything like a highly interactive movie, is your consciousness - you. And we tend to think of this as what drives our actions, because it certainly feels it. I'm 'thinking up' these words, and then writing them down, so certainly this is me - the observing entity within this body?

But that's somewhat begging a question of free will. Did I choose these words, because I literally chose them or is it simply a product of various physical processes within the entity that I'm 'attached' to and 'observing'? And if this latter possibility is the case, then the presence of "me", the consciousness, serves absolutely no purpose beyond being a 'passenger.' And an entity with or without such a thing would behave identically. This could, for instance, include compelling descriptions of consciousness.

This is one of the main issues I have with consciousness. If you write a computer program to give you the value of a variable, add a couple of numbers, or perform any other fundamental operation, then you certainly don't believe some entity suddenly poofs into existence, imagines itself doing such actions of its own will, and then poofs out of existence. Yet, if one believes consciousness is suddenly emergent, then this must become true at some point, whether at 2, 3, or 2^100 operations, which I find no less absurd.


A sibling commenter got my line of thought right.

It sounds like LSD trip because it’s about a topic that is naturally trippy.

By looping back I meant that in a passive passenger (experiencer, observer, you name it) and physical body situation, the latter may coincidentally be driven into thinking about the former (as we were in this thread), but that doesn't really mean the two are interacting. The passenger part doesn't even have its own mind, because its only mind is in the body.

By “work” I mean the physical body. Esoterics aside, our bodies are physical objects which “just work”. They don’t need a metaphysical observer to do human things. Bodies without a “passenger” are usually called p-zombies in philosophy.


> What do you think is more likely, every other human and probably most animals experience consciousness in much the same way that you do, or you are unique and special and your consciousness is much more lucid than everyone else?

I think that's a false dichotomy; third option is that everyone is different.

Also notice I suggested that I might be "only" a model that doesn't even realise it's a model. I sure know I can make models pretend to be specific people, so it's not unreasonable for me to wonder if I can make a model trained on my own public data, at which point it becomes reasonable to wonder (if you assume for the sake of argument that the model is indeed conscious, who knows, roll with it) if that model might post something under the experiential delusion that it's A Real Boy.

Would my digital twin know it wasn't the original? What question could it ask itself that upon introspection would reveal the truth?

> It is clear that everyone else responds to pain in much the same way, everyone else responds to lack of food, water, social interaction, love, etc In much the same way, etc.

Pain, no. I can switch mild pain off at will. I know people into BDSM, who confuse me massively by enjoying it. I know someone who appears to lack the qualia (not the nerve response, just the qualia) of direct pain. There are also people I see in the news occasionally with literally no pain nerves, and they regularly injure themselves as a direct consequence, so no, broken bones aren't as good an example as you think.

(Edit: and another example in the opposite direction, I've also met someone disabled by pain that has no apparent cause).

Food and water, well, my mother had Alzheimer's and what killed her in the end was forgetting how thirst worked. When my dad got bowel cancer and his lower intestines removed, he almost destroyed his kidneys because his thirst reflex didn't come close to the impact of the now-missing moisture absorption. But it doesn't even require end-of-life illnesses to encounter such issues: a decade ago in a different country, I used to know someone who regularly didn't drink enough fluids because they didn't feel thirst, and got kidney stones as a result.

Food is also a common oddity. I'm currently dieting, my hunger is more of a suggestion; yet the need to lose weight comes from the fact that I've previously been ravenous.

Social interactions: I'm an introvert with a handful of connections; an ex of mine connects to so many people so easily that during the pandemic, her online birthday video call had about 90 minutes of her welcoming new people and telling everyone in one sentence how they were connected. I think she invited more people to that call than I can name from my entire life.

> if we operated like we didn't know if other people were conscious to the same degree we were, then we would behave abhorrently

We do behave abhorrently.

You mention animals in the opening paragraph: if animals are as conscious as humans, then meat has the moral standing of industrialised murderous cannibalism, and even dairy has the moral standing of industrialised nonconsensual impregnation.

Fear of this possibility is why I am vegetarian, and why I repeatedly attempt veganism.

We're also pretty bad with out-group humans, though to a much lesser degree. It's why we're willing to go to war, and why most of us demonstrably don't care all that much about civilians dying so long as they aren't of our own nationality — How many genocides happened since 2000? I think most of us don't know the names of the groups, let alone the individuals.

> or that maybe even you are not conscious, you do not actually behave this way in practice

LLMs do not necessarily behave, though they may be so trained, as if they think they are not conscious.

Likewise a VHS tape of Brent Spiner in the 80s and 90s wearing white face paint: the actor is (of course I assume that), but the VHS itself isn't despite displaying the real human actor demonstrating these behaviours.

It tells me nothing about the underlying nature either way.


Again, I think you are focusing on differences at the fringes. I understand your argument, and my contention with your view is perhaps tainted by the fact that I have met people with similar viewpoints who ultimately arrive at the conclusion "I can only be sure that I am real; everyone else is a construct of my mind," and I find that view to be completely despicable and, even more so, completely asinine. But this isn't really what you are saying. You are right: it is quite clear that consciousness is some kind of spectrum, and people have different, perhaps even wildly different, qualia from the same stimuli. My only real contention, I suppose, is that this does not suggest at all that some people are so different that we can only consider them to be virtual, without consciousness, or simply constructs of our own imagination.


> Again, I think you are focusing on differences at the fringes.

Ah, I see.

My perspective is that this comes up so often, in so many people, that it reveals an underlying truth: there's no such thing as "normal", not even in humans.

As you say, we agree that it's a spectrum :)

> "I can only be sure that I am real, everyone else is a construct of my mind"

No solipsism here: even though it is unclear to me how to determine the boundaries, I still have a probability distribution with most of its mass somewhere between "most humans are conscious" and "most big mammals are conscious". The fact that I can't prove either the panpsychists or A J Ayer wrong doesn't mean I have to take either seriously.

Now, that said, "most" humans. There's people who act like they're not really there. Is that behaviour indicative of an inner absence? IDK.

Even in the most extreme human case, that of anencephaly (*do not* google that unless you have an iron stomach), I'd rather not let them be treated poorly, for the same reasons I will never consume insect-based food: just because they fall outside the region where I put most of my probability of finding consciousness, doesn't mean I'm actually confident that it's absent.

(This also applies to AI, but as they share zero evolutionary history I have zero reason to expect an AI which does have subjective experience mapping "what they do" and "how do they feel about it" in anything like the way we do; could be all laughs and smiles on the outside while hating every second, just as Stephen Fry describes some of his depressive episodes, except even then this is over-anthropomorphising and even an AI modelled on a human brain scan may be much more different from this example than any two humans are different from each other).


Well, you can give someone anesthesia, which turns most of the neurons in the brain off, and then you lose consciousness. The fact that we can target the brain specifically to turn consciousness on and off seems to suggest that it's a physical property of brain function, not just a philosophical issue.


You're begging the question, because you're assuming that not only do I have a consciousness but that it's turned on and off by the brain. If I don't happen to have a consciousness then clearly anesthesia is not going to regulate it in any fashion.


this relies on the fact that consciousness is an overloaded term. being conscious (awake) refers to something different than being conscious (thinking) which may both be unrelated to the brain being on or off, which again may be unrelated to the capability to have experiences.

all of these use the same word, they each may or may not be the same thing.


> consciousness is a philosophical issue, not a scientific one

But surely it is both? It may be true that philosophy is the only line of inquiry that currently allows us to explore very far, due to a lack of scientific progress, but even a purely philosophical exploration has scientific implications, and even if we don't have scientific answers in practice, that doesn't mean there are no answers in principle.


Arguably the single most critical tenet of science is falsifiability. For something like consciousness, falsifiability is completely out of the question as we have no way to evaluate it whatsoever, let alone falsify claims made one way or the other.


Critical in terms of the scientific method itself, sure.

But the unfalsifiability of consciousness only points to the hardness of the problem and the current limitations of science, not the importance of understanding consciousness and its implications.

Put another way, even if science can’t currently tell us anything solid about consciousness, the implications of what it can’t tell us are still highly relevant to the kinds of science we do, e.g. we still treat other humans as if they’re conscious and this forms the basis for most ethical decision making.


There's a difference between hard and impossible. Trying to create any sort of formal ideas around something that cannot be measured, quantified, or evaluated in any way, shape, or fashion (and may likely never be able to be so) is simply not possible. It's like trying to create a scientific framework for a God. It just doesn't work. As for ethics, one point we may differ is that I don't see consciousness as inherently leading to anything. Whether awful, or great, people have consciousnesses, or not, does not change my view of them.


You seem to be concluding that because the problem is difficult if not impossible to fully solve, there’s nothing that can be done at all.

You’re also suggesting that because something cannot be perfectly formalized, there’s no value in exploring the problem given what we do know. This seems problematic.

Regarding ethics; the point is not that consciousness is somehow itself responsible for ethics. The point is that despite our incomplete understanding of consciousness, we use our first person lived experience as a basis for collectively formulating rules about how we interact with other humans. The fact that I can’t formally prove that anyone around me is conscious or what it even means to be conscious doesn’t mean I should feel free to treat those people as non-conscious objects.

> Trying to create any sort of formal ideas around something that cannot be measured, quantified, or evaluated in any way, shape, or fashion

The fact that I’m having this conversation with you is a rebuttal to this.

You’re correct that there is a particular kind of measurement that we cannot currently take, i.e. some kind of lower level diagnostic of consciousness itself, whatever it is.

But I disagree strongly that there’s nothing to measure or evaluate. Just one simple example, but why is it that we all seem to experience pain? And how is it that we agree pain exists? Why would most people understand a question like “how bad is the pain on a scale of 1-10?”

We have drugs to help mitigate pain, and we have stats about how efficacious some drugs are vs. others, when one drug is more appropriate for a given situation, etc.

How does any of this exist if we have no way, shape or fashion to measure conscious experience?


The point I was making about ethics is that consciousness, or lack thereof, has nothing to do with how I treat other humans. Treat other people poorly, and they will treat you poorly - consciousness or not. So you treat people well not only out of an ethical consideration, but simply because it's the most beneficial way to act.

As for pain, you experience it because it's an evolutionary wiring in your brain to help you avoid harm. If you didn't recoil when putting your finger in fire, you'd risk severely damaging your finger. Pain saves you from yourself and is one of the most primitive stimuli we have. Even extremely simplistic multicellular creatures react to harmful stimuli or experience pain. Presumably you do not think they have a consciousness, but perhaps they do? The point of this is that there is no way to know anything. For all you know, it's even conceivable that a rock, or a star, or even a nucleus might have a consciousness. I strongly doubt all of these of course, but there is no way to conclusively say they do or they do not.


It could be argued that a similar level of caution should be applied to artificial neural networks. I always find the discrepancy in application of ethics between biology and technology to be an interestingly wide gulf.


How do you think they should determine the threshold of consciousness?


They're grown from stem cells. Neuro-ethics (especially the ethics around using brain organoids) is an entire field of research. Although brain organoids exhibit neural connections and electrical activity, they have so far failed to form even basic synaptic circuits — without which consciousness is probably impossible. But of course we should take every precaution to ensure that they DON'T form consciousness. But that's still a ways away.


What's the harm if they do form consciousness? Is the concern they will develop a willpower and work towards freeing themselves, or is it just a "don't enslave sentient beings" thing?


Ever read "I Have No Mouth, and I Must Scream?"


they tend to be from donated skin cells, in fact, which are reprogrammed to produce brain tissues. see for example https://www.nature.com/articles/nprot.2014.158


Donated by the human whose cells those were, or by someone else?

That Nature article seems to discuss both approaches, using both induced pluripotent stem cells and embryonic stem cells — which of course are not donated by the human being they were once part of.


We only use IPSC. Details of the protocol to turn them into neurons and glial cells are described in our publication. https://www.frontiersin.org/journals/artificial-intelligence...


You can grow them from stem cells.


I was once contacted by FinalSpark, who offered free early remote access to their biocomputing platform. The platform is accessible remotely and allows experiments on neurospheres made from living cells, sitting in an incubator, in their lab, in Vevey, Switzerland. A neurosphere is a round structure built out of approximately 10,000 neurons, connected to electrodes in different places. The platform uses Python scripts to communicate with the neurons, enabling various functionality: stimulate living neurons, read data from neurons, log all the data in a database, and display the results of experiments graphically for further analysis.

I was too busy to come up with a clear project idea that could beat already existing stuff such as neurons playing Doom [0] (not related to FinalSpark). Still waiting for someone to show something cool using this platform.

[0] https://www.youtube.com/watch?v=bEXefdbQDjw
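For anyone curious what the stimulate/read/log loop described above might look like in practice, here is a minimal sketch. All class and method names are invented for illustration (FinalSpark's actual Python API almost certainly differs), and the neurosphere is replaced by a stub so the control loop is runnable without their hardware:

```python
import random
import statistics

class NeurosphereStub:
    """Stand-in for a remote neurosphere with 8 electrodes.
    Real hardware would return measured action potentials; here we
    fake spike counts so the loop below runs without the platform."""
    N_ELECTRODES = 8

    def __init__(self):
        self._last_amp = 0.0

    def stimulate(self, electrode: int, amplitude_uA: float) -> None:
        assert 0 <= electrode < self.N_ELECTRODES
        self._last_amp = amplitude_uA

    def read_spikes(self, electrode: int, window_ms: int = 100) -> int:
        # Fake response: spike count loosely tracks stimulation amplitude.
        return max(0, int(random.gauss(self._last_amp * 2, 1)))

def run_experiment(sphere, trials=20):
    """Stimulate electrode 0, read responses on electrode 1, and
    log (amplitude, spike_count) pairs -- the basic
    stimulate / read / log pattern the platform is said to expose."""
    log = []
    for t in range(trials):
        amp = 1.0 + (t % 5)  # sweep amplitudes 1..5 uA
        sphere.stimulate(electrode=0, amplitude_uA=amp)
        spikes = sphere.read_spikes(electrode=1)
        log.append((amp, spikes))
    return log

log = run_experiment(NeurosphereStub())
print(f"{len(log)} trials, mean spikes = {statistics.mean(s for _, s in log):.1f}")
```

The "display graphically" part would just be a matplotlib plot of the logged pairs; the interesting research question is what closed-loop stimulation policy, if any, makes the spike responses trainable.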


As someone actually in this space, does the "rental" concept give you any concerns about the quality of research this can support? Like, if the previous customer's use of the organoids will have stateful impacts that impact what you observe? It strikes me that with conventional computers in the cloud we have pretty straight-forward assurances that each customer gets the instance in a fresh state.


I am one of FinalSpark's co-founders. You are right, this can be a concern. People who rent have the option to pay for exclusive access.


How hard is it to do something like this on your platform [0]? Are there any other real-world examples where people are using your platform to do something similar?

[0] https://www.youtube.com/watch?v=bEXefdbQDjw


If you mean learning to play Doom, I would say it is unrealistic at this point and still a topic of research; the purpose of the Neuroplatform is precisely to find reliable ways to train neurons to perform a specific task.


What's the timeline for expanding the 8 electrodes (I assume that is the only way to get signals in and out)?


Well, we already have systems with 32 electrodes, and we are looking at alternatives to increase this by several orders of magnitude. I hope we have something next year.


I'd imagine it's easy to set up independent platforms for different users. Organoids are pretty easy to develop. The large costs come from multi-electrode array (MEA) recording devices, which can run over $30K.


That's a great question I hadn't thought of. Neurons definitely have state in vitro (internally and inter-neuron, e.g. synapses and tunneling nanotubes).


@finalspark: This was inevitable; please don't read this as anything other than constructive criticism. I am sure you are aware of the rat cells flying F-22 sims in unlandable weather a decade+ ago.

Can branding hide consciousness? Hopefully not, and if not, can your group please trailblaze an ethical path if it decides it has encountered it? I realize this is a "use less" or "don't do that at all" scenario, and other actors are going to enslave larger brains if they can grow them.

My intuition is that larger billion+ neuron structures are a long way off, but that might not be the case. It's possible an oscillating pressure vessel with the right conditions (robust input/output feedback) may negate the need for a conventional circulatory system.

[2020] https://news.ycombinator.com/item?id=22087410


This is pretty cool. Feels somewhat dystopic, though I suppose that's inevitable. I wonder how easy it will be to develop and scale this technology, given that it's housing organic entities with life support systems.


Extremely difficult, but more likely than artificial meat. It isn't so much the life support system as the constant contamination and lack of an immune system that kills in vitro neuron cultures.

It is much more efficient to use in vivo neurons, since they come with an immune system attached and work just as well for learning tasks.


I am not sure lack of an immune system is our main concern so far at FinalSpark. I believe it has more to do with the side effects of physical contact between neurons (and glial cells) and not fully biocompatible materials like the electrodes.


Recommended reading (it's a science fiction piece but still relevant): https://qntm.org/mmacevedo


It's like you guys have never even heard of 'body on a chip' https://school.wakehealth.edu/research/institutes-and-center....


It always amuses me whenever a biology story here gets the tech crowd quivering, when they would normally be crowing gleefully about ASI and human obsolescence.


I took a course on neuronal computing in undergrad. I expected sci-fi; instead I was bored out of my skull. (Pun somewhat intended.)


What was it about?


Mathematically modelling neurons, basically. We were using some horrendous sketchy PDE solver that had by far the most hostile UI of any program I've ever used, and that's quite an achievement.

It was taught by this fellow, who is one of the loveliest folks I've ever met in academia: https://uwaterloo.ca/applied-mathematics/profiles/brian-inga...

Edit: Looks like the course is still around 20 years later, but has expanded quite a bit in scope since I took it. https://uwaterloo.ca/applied-mathematics/current-undergradua...


That sounds very interesting apart from the horrible software!


It’s a classical “loud minority”.


A brain is more energy efficient than a chip, but how does a couple of cells connected to wires make more economic sense than a chip with gazillions of transistors?


This is closer to rent a transistor, and the main use case is to find out what its voltage curves are.

It is not a computer.

I did work on a version which used in vivo brains for actual computation. It worked but no one wanted to invest in building the matrix for rodents. With the benefit of hindsight I shouldn't have added the slide explaining ethical considerations: 'Container Labs in international waters help us manage regulatory risks'.


More like stepping stones.

Early adopters will drive down the cost.


Yeah. My understanding was that you'd need constellations of neurons to achieve a complex result, rather than a couple individual ones. Perhaps this article is light on details and it's more involved.


The article said each organoid contains around 10k neurons


But you can hire a human with a brain for less?


Not for long.

And really, can you? GPT might not be an expert in every field, but it does better than the average person at average things. It is already cheaper than a human for some tasks, because it can do so many, and fast, if looking at an hourly rate.


There are billions of potential brains, and many of them live in poverty


Managing humans isn't free. There is HR, managing, directing, blah blah blah, complaining. With AI just write an api call.

"The World's Call Center Capital Is Gripped by AI Fever - and Fear" (bloomberg.com), posted by msmash on August 28, 2024: The Philippines' $38 billion outsourcing industry faces a seismic shift as AI tools threaten to displace hundreds of thousands of jobs. Major players are rapidly deploying AI "copilots" to handle tasks like summarizing customer interactions and providing real-time assistance to human agents, Bloomberg reports. Industry experts estimate up to 300,000 business process outsourcing (BPO) jobs could be lost to AI in the next five years, according to outsourcing advisory firm Avasant.

However, the firm also projects AI could create 100,000 new roles in areas like algorithm training and data curation. The BPO sector is crucial to the Philippine economy as the largest source of private-sector employment. The government has established an AI research center and launched training initiatives to boost workers' skills.


that's not very vc-minded of you /s


This falls squarely into the "Your scientists were so preoccupied with whether or not they could, they never stopped to consider whether they should" category.

There are so many unknowns with potentially horrifying answers.

We don't know what generates human consciousness. But we know the brain is a key part of it. At what point does a bunch of lab grown human neurons become conscious? Can it become conscious outside of a body? What would that experience be like if a lab grown brain used as a computer developed consciousness? And what would happen to that consciousness?

The potential energy savings aren't worth risking the potential horror.


> What would that experience be like if a lab grown brain used as a computer developed consciousness? And what would happen to that consciousness?

That's my concern as well, because I think it would go insane almost instantly. This would also render the whole project useless; you can't reasonably expect an insane brain to yield correct results. Or would it be a doubly trapped brain existing as a chip, unable to move or communicate, witnessing its neurons being used as a computing resource, but without being able to do anything about it?

It's very very scary, and there needs to be a process for testing for any type of consciousness and a plan for what to do if it's detected, I recommend just killing it instantly to prevent any suffering.


> witnessing it's neurons being used as a computing resource

What exactly is doing the witnessing here, if the neurons are being used for computation? A lot of these comments seem to tacitly assume there is a little person inside our brains who is at the controls, and that organoid computing means that these controls are forcibly taken away, causing the magic little person to suffer horribly. It's strangely childish.


> We don't know what generates human consciousness. But we know the brain is a key part of it. At what point does a bunch of lab grown human neurons become conscious?

Why just human neurons for a human consciousness?

We know very little of what sparks consciousness, but the consensus seems to be that most living things experience it to varying degrees. Even though most of us, conscious humans, are absolutely fine with millions of mammals being slaughtered without much care for their feelings.


It hits different when it’s your own species. At least, for humans – plenty of animals exhibit cannibalism without a second thought.


Humans also exhibit cannibalism. In some cases without a second thought.


Another way to say it: it’s ethical slavery. We’re hacking slavery, yeah!

— Conner O’Malley (in his excellent “Standup Solutions”)


Thought Emporium on YouTube made a few videos utilizing organoid bioprocessors, but IIRC their neuron source is pig, not human.

https://youtu.be/bEXefdbQDjw?si=-YVEmZhSrkgBj3I9


Honestly not sure why these aren't pig neurons too. Surely there are no observable differences at such a tiny scale, and using human cells just makes you a target of wrath.


While human brains are probably just scaled-up chimpanzee brains, individual primate neurons are significantly more sophisticated than those of other mammalian taxa, particularly with epigenetic changes: https://www.nature.com/articles/s41467-022-34800-w But also the plain structure of the dendrites and how they form connections: https://pubmed.ncbi.nlm.nih.gov/22230639/

Now: I don't think wetware computing is actually sophisticated enough to leverage primate-specific advantages. But 10,000 primate neurons would probably be "more powerful" than 10,000 pig neurons. (OTOH it seems like corvid neurons are even more powerful than primates, maybe we should use crows.)


Oh, didn't know that! Thought the human advantage was mostly just balancing cancerous growth to get really big or something. And yeah, we should be studying bird brains, they are so smart in such a tiny package!


This doesn't much bother me, but I also find it pointless. These will not be reproducible or easily manufactured. Maybe good for neuroscience experiments?

Now, chimeric animals made with addition of human stem cells, that would have significant ethical problems.


What can you run on it? How far away are we from this being able to train, say, an LLM?


>> How far away are we from this being able to train, say, an LLM?

Very far. Even if it was large enough and you could train it as an LLM, there's no way to read out the weights or even the connectivity graph.


Probably Doom, assuming someone gets bored during the coming weekend.


They're likely hoping you figure all that out.


I Have No Mouth, And I Must Scream.

Seriously, the hubris is stunning.


What does the failure mode look like on this? Half your neurons died?


I can see how this would be useful for research, but on the other hand it feels like a step towards the dystopian servitors[0] from Warhammer 40k.

[0] "robots" made from criminals/clones since AI is banned


It appears the cyberpunk universe has arrived, and not in the way I expected.

The first thing I did was check whether the publication date was April 1. I'm not entirely sure I believe this article, but if it's satire, it's impossible to tell. I guess it's real, and the world has gotten weirder.


This article made me realize I am officially old. I find it difficult to believe I could ever fit this within my Overton window before I die.

Good luck to the next generation and your human brain slave computers, I'm 42, I'll be out of your way soon.


"Organic computing at a fraction of the cost of silicon? It's a no-brainer!"


We already found a way to generate porn. Can we please stop now?


I’m gonna throw up.


All I want to know is how one of these can be used to generate more ad revenue.


Sometimes the Torment Nexus is allegorical. Sometimes they build the actual thing...


Company is called FinalSpark. Sounds like an unethical final solution for AI.


FinalSpark, the 'Final Solution' to all your human problems.



