> directly moving towards an improved model via linear regression is more efficient than randomly changing your model and then running a natural selection simulation to improve fitness.
That is objectively true, but don't underestimate how much of that process is simulated by the way we train our models. The natural selection bit was never natural to begin with (it's obviously artificial), and it is the rough equivalent of the final step in training a model: verification on unseen data. If the model performs worse than a previous one, it is discarded!
Evolutionary algorithms are somewhat interesting because they can come up with weird stuff that works anyway: the random element can produce entirely novel approaches (to the point that we have a hard time understanding what is going on), and that's something I have not seen with neural nets.
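To make the "mutate randomly, discard the worse candidate" loop concrete, here is a minimal sketch of a (1+1) evolution strategy in Python. The fitness function and parameters are toy choices for illustration, not anything from a specific library:

```python
import random

def evolve(fitness, genome, generations=5000, sigma=0.1):
    """Minimal (1+1) evolution strategy: randomly mutate the current
    best genome and keep the child only if it is at least as fit --
    worse candidates are simply discarded."""
    best = genome[:]
    best_fit = fitness(best)
    for _ in range(generations):
        # Random mutation: add Gaussian noise to every gene.
        child = [g + random.gauss(0, sigma) for g in best]
        child_fit = fitness(child)
        if child_fit >= best_fit:  # selection: discard if worse
            best, best_fit = child, child_fit
    return best, best_fit

# Toy fitness: maximize -(x - 3)^2, so the optimum is x = 3.
random.seed(0)
sol, fit = evolve(lambda g: -(g[0] - 3.0) ** 2, [0.0])
```

Nothing in the loop knows about gradients; it only compares fitness values, which is why such methods can wander into odd solutions a gradient-based optimizer would never reach.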
There are some interesting hybrids:
https://www.sciencedirect.com/science/article/abs/pii/S09521...