Hacker News | new | past | comments | ask | show | jobs | submit | diedyesterday's comments

You obviously seem to have no idea how mafia-style dictatorships like those of Iran, Russia, and Venezuela work. That's no fault of yours; most people don't.

And even their own citizens only come to this realization after living under them for a long time, partly because they get caught up in the constant propaganda campaigns that are a hallmark of these regimes. They operate in permanent propaganda mode.


Google is still far, far better than the competition, which is crawling with manipulated results, fake sites, and phishing scams. The others are much easier to take advantage of and manipulate (e.g., in search ranking), and lack security-based filtering, etc.

I have personally been bitten by results on the likes of Bing or DDG (fake browser add-ons at the top, crypto phishing sites at the top, etc.).

User experience also varies somewhat: for me, with the same search query ("midjourney"), the intended site is the first result.


To me its function looks similar to a sponge or a tampon: an additional piece that absorbs the external influence and is then subtracted away (you remain dry :)))


Regarding the conclusion about language-invariant reasoning (conceptual universality vs. multilingual processing): it aids understanding, and becomes somewhat obvious, if we regard each language as just a basis of some semantic/logical thought space in the mind (analogous to the duality of tensors and bases in linear algebra).

The thoughts/ideas/concepts/scenarios are invariant states/vectors/points in the (very high-dimensional) space of meanings in the mind, and each language is just a basis used to reference/define/express/manipulate those vectors: a coordinatization of that semantic space.
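The linear-algebra analogy can be made concrete with a toy change-of-basis sketch (NumPy; the vector and bases here are arbitrary illustrations of the analogy, not anything from the Anthropic work):

```python
import numpy as np

# The "thought" is an invariant vector in the semantic space.
v = np.array([2.0, 3.0])

# Two different bases ("languages"), each a coordinatization of the same space.
# Columns are basis vectors.
B1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # standard basis
B2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])  # a different (still invertible) basis

# Coordinates of v in each basis: solve B @ c = v.
c1 = np.linalg.solve(B1, v)
c2 = np.linalg.solve(B2, v)

# The coordinates ("words") differ between bases,
# but both reconstruct the same underlying vector ("thought").
assert not np.allclose(c1, c2)
assert np.allclose(B1 @ c1, B2 @ c2)
```

The point of the sketch: the coordinate tuples `c1` and `c2` are language-specific expressions, while `v` itself is the language-invariant content.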

Personally, I'm multilingual, with native-level command of several languages. It often happens that I remember having a specific thought but don't remember in which language it was. So I can personally relate to this finding of the Anthropic researchers.


So Erdogan wants to be the Putin of Turkey. It's up to the Turks whether he succeeds.


...until they distill that "11x efficient training" again ...


Don't forget that this model probably has far fewer parameters than o1 or even 4o. It's a compression/distillation, which frees up a great deal of compute to build models much more powerful than o1. At the least, it allows further scaling compute-wise (if not in the amount of non-synthetic source material available for training).
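The distillation mentioned above can be sketched with the classic soft-label objective (a generic Hinton-style knowledge-distillation loss in NumPy; this is an illustration of the technique, not the model's actual training recipe, and the temperature value is an arbitrary choice):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: minimized when the student matches the teacher's outputs.
    The T**2 factor keeps gradient scale comparable across temperatures."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean() * T**2

# A student that mimics the teacher scores strictly lower loss than one that doesn't.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.0, 1.0, 0.0]])
assert distillation_loss(teacher, teacher) < distillation_loss(student, teacher)
```

The idea is exactly the "compression" in the comment: a smaller student is trained against the teacher's full output distribution rather than hard labels, transferring capability at a fraction of the parameter count.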


He seems to have also won an ACM award, and there is a (possible) picture of him there: https://awards.acm.org/award-recipients/burrows_9434147


Reminds me of how Google's AlphaGo learned to play the best Go ever seen. This seems, in a way, a generalization of that.


Sam Altman and OpenAI are following the example of Celebrimbor it seems. And I love what may come next...

