You're absolutely correct, and it's ridiculous that HN can't fathom that a hallucination, in both the traditional and the LLM sense, is characterized not by its form or content per se, but by that form or content being incongruent with some external factor.
Had them as kids. Amazing animals: intelligent, social, interactive, curious, playful (slightly mischievous, even), and mostly easy to keep. The only major downside is their short lifespan.
Can't recommend them enough. I have so many fond childhood memories of my pet rats and their hijinks.
Are math skills really required, though? Most aspects of deep learning don't take deep mathematics to understand. Backprop, convolution, attention, recurrent networks, skip connections, GANs, RL, GNNs, etc. can all be understood with only basic calculus and linear algebra. Backprop, for instance, is essentially just the chain rule (see the sketch below).
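To make that concrete, here's a minimal sketch of backprop for a one-hidden-layer network in plain NumPy (my own toy example, not from the thread; the shapes and layer sizes are arbitrary). Every backward line is one application of the chain rule:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))        # batch of 4 inputs
    y = rng.normal(size=(4, 1))        # targets
    W1 = rng.normal(size=(3, 5))       # hidden-layer weights
    W2 = rng.normal(size=(5, 1))       # output-layer weights

    # Forward pass
    h = np.tanh(x @ W1)                # hidden activations
    pred = h @ W2                      # predictions
    loss = ((pred - y) ** 2).mean()    # mean squared error

    # Backward pass: each line is one chain-rule step
    dpred = 2 * (pred - y) / len(y)    # dL/dpred
    dW2 = h.T @ dpred                  # dL/dW2
    dh = dpred @ W2.T                  # dL/dh
    dW1 = x.T @ (dh * (1 - h ** 2))    # dL/dW1, via d(tanh z)/dz = 1 - tanh^2 z

    print(loss, dW1.shape, dW2.shape)

Nothing here goes beyond first-semester calculus and matrix multiplication.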
I understand that the theoretical motivation for models is often more math-heavy, but I'm skeptical that motivations need always be mathematical in nature.
I think CNNs follow very naturally from the notion of shift/spatial invariance in visual processing. That doesn't require a deep mathematical understanding; the sketch below is just a sanity check of the intuition.
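A quick numerical check of that intuition (my own illustration, not from the thread): a 1-D convolution commutes with shifts, i.e. convolving a shifted input gives the shifted output. Circular (wrap-around) padding is assumed so the identity holds exactly at the boundaries:

    import numpy as np

    def circular_conv(signal, kernel):
        # Cross-correlation with circular padding, as CNN layers compute,
        # so shift-equivariance holds exactly even at the edges.
        n = len(signal)
        return np.array([sum(signal[(i + j) % n] * kernel[j]
                             for j in range(len(kernel)))
                         for i in range(n)])

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)   # a 1-D "image"
    w = rng.normal(size=3)    # a filter

    s = 2                     # shift amount
    lhs = circular_conv(np.roll(x, s), w)   # convolve the shifted input
    rhs = np.roll(circular_conv(x, w), s)   # shift the convolved output
    print(np.allclose(lhs, rhs))            # True: conv commutes with shifts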
Every MLE who didn't study math likes to downplay its importance. Yeah, you don't need measure-theoretic probability, but you do need a grasp of linear algebra to structure your computations well. Remember the normalization we do in attention? That has a mathematical justification: dividing the dot products by sqrt(d_k) keeps the logits at roughly unit variance, so the softmax doesn't saturate into near-zero gradients. So I guess, yeah, academics did have a role in building LLMs.
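A quick numerical check of that justification (my own demo, not from the thread): for unit-variance queries and keys, the raw dot product has standard deviation about sqrt(d_k), and the 1/sqrt(d_k) scaling from scaled dot-product attention brings it back to roughly 1:

    import numpy as np

    rng = np.random.default_rng(0)
    d_k = 512                                # key/query dimension
    q = rng.normal(size=(100_000, d_k))      # unit-variance queries
    k = rng.normal(size=(100_000, d_k))      # unit-variance keys

    logits = (q * k).sum(axis=1)             # raw dot products
    print(logits.std())                      # ~ sqrt(512) ~ 22.6
    print((logits / np.sqrt(d_k)).std())     # ~ 1.0 after scaling

Without the scaling, logits with magnitudes in the tens would push the softmax toward one-hot outputs and vanishing gradients.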
I mean, computer scientists really do like to pretend they invented the whole field, whereas in reality the average OS, compilers, or networks class has nothing to do with core ML. But of course those are also important, and these barbs don't get us anywhere.
I think you might've taken my point too strongly. Of course math is very useful, and certain contributions are purely mathematical. I just don't think it's as hard a requirement for innovation as was claimed.