
I would pose the question differently: under his leadership, did Meta achieve a good outcome?

If the answer is yes, then it's better to keep him, because he has already proved himself and you can win in the long term. With Meta's pockets, you can always create a new department specifically for short-term projects.

If the answer is no, then there's nothing to discuss here.



Meta did exactly that: it kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.

If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows an academic mindset. He said his research is evaluated as follows: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."

But as an industry researcher, he should know how his research fits the company's vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now he seems to have failed that objective, even though he has been at Meta for more than 10 years.


Also, he always sounds like "I know this will not work." Dude, are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.


Philosophers are usually more aware of what they don't know than you seem to give them credit for. (And oracles are famously vague, too.)


Do you know that all formally trained researchers have "Doctor of Philosophy", or PhD, to their name? [1]

[1] Doctor of Philosophy: https://en.wikipedia.org/wiki/Doctor_of_Philosophy


If academia is in question, then so are its titles. When I see "PhD", I read it as a "we decided he was at least good enough for the cause" PhD, or a "he fulfilled the criteria" PhD.


He probably predicted the asymptote everyone is approaching right now.


So did I, after trying Llama/Meta AI.


He's speaking to the entire feedforward, Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone, and would rather move on to more appropriate ways of modeling ontologies per se, rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.

His view really resonates with me, given my background in physics and information theory. I, for one, welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.


If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. It looks to me like he's just trying to ensure he'll have funding to chase the global maximum going forward.


> If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.

Is the real bubble ignorance? Maybe you'll cool down, but will the rest of the world? There will just be more DeepSeeks and more advances until the US loses its standing.


How is it a foregone conclusion that squeezing the stone will continue to produce blood?


I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.


Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the authors are Chinese nationals, but it's telling.


Is an American model Chinese because Chinese people were on the team?


There is no need for that tone here.


OP edited the post.


What are these Chinese labs made of?


500 remote Indian workers (/s)


Most papers are also written in the same language; what's your point?


LeCun was always part of FAIR, doing research; he was not part of the LLM/product group, which reported to someone else.


Wasn't the original LLaMA developed by FAIR Paris?


I hadn't heard that, but he was heavily involved in a cancelled project called Galactica that was an LLM for scientific knowledge.


Yeah, that stuff generated embarrassingly wrong scientific 'facts' and citations.

That kind of hallucination is somewhat acceptable for something marketed as a chatbot, less so for an assistant helping you with scientific knowledge and research.


I thought it was weird at the time how much hate Galactica got for its hallucinations compared to those of competing models. I get your point, and it partially explains things, but it's not a fully satisfying explanation.


I guess another aspect is that being too early is not too different from being wrong.


Then we should ask: will Meta come close enough to fulfilling the promises made, or will it keep achieving good enough outcomes?



