
> But I could be wrong - perhaps our current navigation of the problem space will reveal some strange pathway to true AI.

Most likely wrong, if you compare it to how most other engineering development has happened.

From my point of view, the most likely path towards some form of weakly general intelligence at this point is emergence: we keep working on these narrow problems, and from the broadening networks we inadvertently end up with something indistinguishable from a general AI.



What engineering development are you referring to? Do you have any examples?


Engines, planes, photovoltaics. They all gradually improved in ways that don't really map onto the organic functions they augment, replace, or take inspiration from.

Planes don’t fly like birds. There is very little reason for what you would call a “general intelligence” AI to develop in a way that mimics our own intelligence. It might, but I would find that more notable than if it did not.


I think the biggest difference between building a general intelligence and other engineering problems is that in the latter, the target function is well-defined. With planes, anything that can fly safely goes. With nuclear reactors, anything that generates electricity. But with intelligence... what does it need to do, exactly? The goalpost keeps moving because we have no idea where it should be. We're building AI blindfolded. I don't think we're currently solving the problem, simply because we don't know what the problem is.


"It's just a glorified pattern-matching search engine!" he screamed as his legs were converted into paperclips.



