That's literally the only one that didn't shock me. What universe do you live in where machine learning algorithms aren't tied together with Python? And in what industry is machine learning not mainly neural networks?
At the company where I work, most of it is done in MATLAB for prototyping ("putting it together"), then it goes to C++. Again, not neural networks. I'm talking about all the other machine learning.
Maybe it's a rare case.
What kind of machine learning does your company do? It's probably something simple, because for anything complicated, like computer vision, speech recognition, any non-trivial NLP, or large-scale recommender systems, you do need neural networks.
Autonomous driving, including, for example, computer vision. No NLP or speech.
As for the "need" for neural networks, there is a lot of ongoing discussion.
As you may know, the "black box" nature of NNs makes them a bit harder than other approaches to validate for safety-critical systems.
They are used, yes, but they're not the biggest part. Maybe 10% at most.
We are doing L4, so if something goes wrong, the manufacturer is responsible. Once we were in a test car and it was behaving great. At one point, the car "made the decision" to accelerate and pass. The manager we were giving the demo to asked, "Why did the car not wait a little longer? Can we change that behavior?" A very long discussion ensued about data sets, labeling, training... Long story short: we have to be able to change behavior (even for perception) in a deterministic way. So our use of NNs went down dramatically after that.
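To make "change behavior in a deterministic way" concrete, here's a minimal, purely hypothetical sketch (the names and the threshold are made up, not our actual stack): a hand-written rule gates the learned planner's decision, so the behavior the manager asked about can be changed by editing one reviewable constant instead of relabeling data and retraining.

```python
from dataclasses import dataclass

@dataclass
class PlannerOutput:
    wants_overtake: bool        # decision proposed by the learned component
    time_behind_lead_s: float   # how long we have been following the lead vehicle

# The "deterministic knob": behavior changes by editing a reviewable constant,
# not by touching data sets, labels, or training.
MIN_FOLLOW_TIME_S = 3.0

def apply_overtake_gate(plan: PlannerOutput) -> bool:
    """Suppress an overtake until we have followed the lead vehicle long enough."""
    if plan.wants_overtake and plan.time_behind_lead_s < MIN_FOLLOW_TIME_S:
        return False
    return plan.wants_overtake
```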
This is a good point, and I expect that in the near future we will be able to simply ask a neural network to explain and change its behavior without retraining. I already see signs of that in the prompt engineering used to interact with GPT-3 or DALL-E 2.
That's only going to be as good as asking a human to explain their behaviour. It might be a semi-accurate interpretation of its actions based on its internal knowledge, but it will never be the actual thing. The actual decision-making inside a neural network fundamentally isn't something you can reduce exactly to language.
Yeah, it's not clear whether we will be satisfied with those explanations: "I decided to slow down because I see these (seemingly random and irrelevant) objects around me", because during training on trillions of video frames the model learned it should slow down when similar object configurations are present, reducing the chance of an accident by 0.0002%.
Even if the given reason is simple, like "I decided to slow down because the car in front of me is red", an explicit override of the learned rule ("don't slow down when you see red cars") might increase the chance of an accident by far more than 0.0002%, because we are interfering with the model's decision-making process.
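As a toy illustration (entirely made up, not based on any real system): the override doesn't just cancel the "red car" rule, it also vetoes every other reason the model had for slowing down in those scenes, because those reasons aren't separable from the human-readable one.

```python
def model_slowdown_probability(scene: dict) -> float:
    # Stand-in for a learned model. In reality this output entangles many weak
    # cues; "red car ahead" is just the one that happens to be human-readable.
    return 0.9 if scene.get("red_car_ahead") else 0.1

def should_slow_down(scene: dict) -> bool:
    p = model_slowdown_probability(scene)
    # Explicit human override of the learned rule: "don't slow down for red cars".
    # Note that it discards the model's entire slow-down signal in these scenes,
    # not only the part attributable to the car's color.
    if scene.get("red_car_ahead"):
        return False
    return p > 0.5
```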