
The early research in neural networks was hampered by proofs that perceptrons could never solve certain functions, like XOR. DNNs could have been developed much sooner otherwise. I view these proof papers with some skepticism, since they can be unnecessarily dismissive of good ideas.


>The early research in neural networks was hampered by proofs that perceptrons could never solve certain functions, like XOR. DNNs could have been developed much sooner otherwise.

These proofs still hold; a pure MLP without a suitable nonlinear activation function isn't very useful, in theory or in practice (stacked linear layers collapse to a single linear map). What made MLPs useful was the realisation that combining the layers with a proper nonlinearity changes that, and this discovery took time.
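
To make that concrete, here is a minimal numpy sketch (my own illustration, with hand-picked rather than trained weights): a brute-force scan over weights finds no single linear threshold unit that reproduces XOR, while two layers joined by a ReLU nonlinearity compute it exactly.

    import numpy as np

    # The four XOR input/output pairs.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    # (a) Brute-force scan: no single linear threshold unit
    #     1[w1*x1 + w2*x2 + b > 0] reproduces XOR on all four points.
    grid = np.linspace(-2.0, 2.0, 41)
    found = any(
        np.array_equal(((X @ np.array([a, c]) + b) > 0).astype(int), y)
        for a in grid for c in grid for b in grid
    )
    print("some linear unit computes XOR:", found)   # False

    # (b) Two layers with a ReLU compute XOR exactly
    #     (hand-picked weights, purely for illustration):
    #     out = relu(x1 + x2) - 2 * relu(x1 + x2 - 1)
    relu = lambda z: np.maximum(z, 0.0)
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([0.0, -1.0])
    w2 = np.array([1.0, -2.0])
    out = relu(X @ W1.T + b1) @ w2
    print("two-layer ReLU net output:", out.astype(int))  # [0 1 1 0]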


How did papers that demonstrated that single-layer neural networks can't do XOR, but multi-layer neural networks can, hamper development?

XOR is simply not linearly separable; the requirement for an MLP or a kernel trick still holds (see the sketch below).

It is for a similar reason that attention can handle majority gates but not parity gates in the general case.

Acknowledging that reality led to new developments, but it remains a limitation.
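
To illustrate the kernel-trick point, here is a small numpy sketch (my own example, using an explicit degree-2 feature map as a stand-in for the kernel): adding the product feature x1*x2 makes XOR linearly separable.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    # Explicit feature map phi(x1, x2) = (x1, x2, x1*x2) -- the expansion a
    # degree-2 polynomial kernel computes implicitly. In that space XOR is
    # linearly separable: weights (1, 1, -2) with threshold 0.5 split the
    # four points perfectly, since x1 + x2 - 2*x1*x2 equals XOR exactly.
    phi = np.column_stack([X, X[:, 0] * X[:, 1]])
    w = np.array([1.0, 1.0, -2.0])
    pred = (phi @ w > 0.5).astype(int)
    print(pred)                      # [0 1 1 0]
    print(np.array_equal(pred, y))   # True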

Perceptrons are binary classifiers.


I’ve only read the abstract. It says that they have experiments to back their claims. So, that’s a proof claim with experimental data. It’s in a field where most learning happens by experimental exploration, too.

I don’t think it will hold us back. If anything, it’s very exciting to see how many people in the ML field are challenging the status quo from many different angles.



