The early research in neural networks was hampered by proofs that perceptrons could never compute certain functions, like XOR. DNNs could have been developed much sooner otherwise. I view these proof papers with some skepticism, since they can be unnecessarily dismissive of good ideas.
>The early research in neural networks was hampered by proofs that perceptrons could never compute certain functions, like XOR. DNNs could have been developed much sooner otherwise.
These proofs still hold: a pure MLP without a nonlinear activation function collapses to a linear model, so it still can't represent XOR, in theory or in practice. What made these networks useful was the realisation that putting a proper nonlinear activation between the layers changes the picture entirely, and that realisation took time; the sketch below shows the difference on XOR.
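Here is a minimal, self-contained sketch of that point (plain Python; the weights are hand-picked for illustration, not learned). The same 2-2-1 architecture computes XOR exactly when a ReLU nonlinearity sits between the layers, but with an identity activation it collapses to an affine function and gets one of the four inputs wrong.

```python
# Minimal sketch: a 2-2-1 network with hand-picked weights (illustration only).
# With ReLU between the layers it computes XOR exactly; with an identity
# activation the same weights collapse to an affine function and fail.

def relu(z):
    return max(0.0, z)

def identity(z):
    return z

def two_layer_net(x1, x2, act):
    # Hidden layer: h1 = act(x1 + x2), h2 = act(x1 + x2 - 1)
    h1 = act(x1 + x2)
    h2 = act(x1 + x2 - 1)
    # Output layer: y = h1 - 2*h2
    return h1 - 2 * h2

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(
        f"x=({x1},{x2})  xor={x1 ^ x2}  "
        f"relu_net={two_layer_net(x1, x2, relu):.0f}  "
        f"linear_net={two_layer_net(x1, x2, identity):.0f}"
    )
```

With the identity activation the whole network reduces to 2 - x1 - x2, which is wrong at (0, 0); no affine function matches XOR on all four inputs, which is essentially what the original proofs cover.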
I've only read the abstract, but it says they have experiments to back their claims. So it's a proof accompanied by experimental data, in a field where most learning happens by experimental exploration anyway.
I don't think it will hold us back. If anything, it's very exciting to see how many people in the ML field are challenging the status quo from so many different angles.