Hacker News

> Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.
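As a rough back-of-envelope sketch of that point (the dollar figures are the parent comment's hypotheticals, not real costs, and the hours-per-year numbers are assumptions), the comparison comes down to cost per effective research hour:

```python
# Hypothetical figures from the parent comment: $1M/year for an AGI
# instance vs. $500k/year for a human researcher.
agi_cost, human_cost = 1_000_000, 500_000

# Assumed hours actually spent on research per year:
# the AGI runs 24/7; a human works roughly 2,000 hours/year.
agi_hours = 24 * 365   # 8,760
human_hours = 2_000

agi_rate = agi_cost / agi_hours        # ~$114 per research hour
human_rate = human_cost / human_hours  # $250 per research hour

print(round(agi_rate), round(human_rate))
```

On these made-up numbers the AGI is the cheaper researcher per hour even at double the annual cost, which is the 24/7 argument in miniature. Whether those hours translate into faster results is exactly what the rest of the thread disputes.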



This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.

Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.

So an AGI/expert LLM may be a huge boon for e.g. drug discovery, which already makes heavy use of massively parallel experiments and simulations, but may not be so useful for biological research (a perfect simulation of even a fruit fly down to the genetic level likely costs more compute than the human race can presently provide), or for research that involves time-consuming physical processes, like climate science or astronomy, which both need to wait periodically to gather data from satellites and telescopes.


> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

With automation, one AI could presumably run a whole lab's worth of experiments in parallel. Not to mention, it would likely be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.


Presumably ... the problem is that this argument has been made purely as a thought experiment, same as gray goo or the paperclip argument. It assumes any real-world hurdles to self-improvement (or self-growth, for gray goo and paperclipping the world) will be overcome by the AGI because it can self-improve, which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.


What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.


Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.


Only if it’s economically feasible. If it takes a city-sized data center and five countries’ worth of energy, then… probably not going to happen.

There are too many unknowns to make any assertions about what will or won’t happen.


> ...the fact that the [AGI] can/will work on the issue 24/7...

Are you sure? I previously accepted that as true, but, without being able to put my finger on exactly why, I am no longer confident in that.

What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level. -- Marvin to Arthur Dent

(...as an anecdote, not the impetus for my change in view.)


>Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.

Driving from A to B takes five hours; if we get five drivers, will we arrive in one hour or in five? Research has many steps like this, in the sense that the time is fixed and independent of the number of researchers, or even of how much better one researcher is than another. Adding in something that neither sleeps nor eats isn't going to make those steps more efficient.
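The fixed-time-step point is essentially Amdahl's law: if some fraction of a research cycle takes fixed wall-clock time no matter how many (or how tireless) the workers are, overall speedup is capped. A minimal sketch with made-up numbers:

```python
def speedup(serial_fraction, workers):
    """Amdahl's law: serial_fraction of the work cannot be parallelized
    or accelerated by adding workers."""
    return 1 / (serial_fraction + (1 - serial_fraction) / workers)

# If 40% of a research cycle is fixed wall-clock time (incubation,
# regulatory review, waiting on instruments), then even an effectively
# infinite pool of tireless workers tops out at 1/0.4 = 2.5x.
print(round(speedup(0.4, 5), 2))      # five workers
print(round(speedup(0.4, 10**9), 2))  # effectively infinite workers
```

The 40% serial fraction is an illustrative assumption; the qualitative point is that the cap depends on the serial fraction alone, not on how many sleepless researchers you add.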

I remember when I was an intern and my job was to incubate eggs and then inject the chicken embryos with a nanoparticle solution for examination under a microscope. Incubating the eggs and injecting the solution wasn't limited by my need to sleep. And our biggest bottleneck was getting the process approved by the FDA, not the fact that our interns required sleep to function.


If the FDA was able to work faster/more parallel and could approve the process significantly quicker, would that have changed how many experiments you could have run to the point that you could have kept an intern busy at all times?


It depends so much on scaling. Human scaling is counterintuitive and hard to measure: mostly heavily sublinear, like log2 or so, but sometimes things are only possible at all by adding _different_ humans to the mix.
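To illustrate the "log2 or so" intuition (the scaling function here is an assumption chosen for illustration, not an empirical law):

```python
import math

def effective_output(n_people):
    """Assumed sublinear team scaling: each doubling of headcount
    adds roughly one unit of effective output."""
    return math.log2(n_people) + 1

for n in (1, 2, 8, 64):
    print(n, effective_output(n))
```

Under this toy model, going from 1 to 64 people only raises effective output from 1 to 7 units, which is why adding one more tireless worker to a team often moves research speed far less than headcount arithmetic suggests.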


My point is that “AGI has human intelligence” isn’t by itself enough of the equation to know whether there will be exponential or even greater-than-human speed of increase. There’s far more that factors in, including how quickly it can process, the cost of running, the hardware and energy required, etc etc

My point here was simply that there is an economic factor that trivially could make AGI less viable over humans. Maybe my example numbers were off, but my point stands.



