I think you are writing your comments conditioned not just on what you are responding to but also on a lot of internal assumptions about their intentions. The person you are responding to said or implied nothing about surveillance or Western assumptions about China. They are making the claim (apologies to them if I am misrepresenting it) that societies or governments achieve extraordinary goals (i.e. goals that they were not expected to achieve within a certain time frame) because of physical, economic and social conditions, not because of cultural elements. Cultural explanations are post hoc, i.e. they are used after the fact to boost morale or give a sense of unity. More concretely, if China, the US, the EU, Japan, India, and Russia can launch spacecraft to the moon, so can Nigeria and Kenya, given enough time, resources and the right incentive structure, even if they are culturally very different from the countries above.
I feel the American media and general public have gotten psyched out by the recent announcements coming from China, whether new weapons or DeepSeek or conveniences in the cities. I wish the Chinese people all the best and sincerely hope they prosper. At the same time, I really wish Americans would get out of this panic mode. The country has answered several challenges over the last 250 years, some of them existential threats. America generally lays out its problems for everyone to see, and Americans tend to be extremely self-critical. This is often misconstrued by some as a sign of weakness. Anyone who believes that is, in my opinion, delusional.
As an aside, there are some comments about the "Chinese way of thinking" and the "American way of thinking". I generally think these discussions veer off into notions of cultural superiority. That, also in my opinion, is the mark of weak minds. The fact is once something is shown to be possible, it is exponentially easier to duplicate and improve it. America did this with German technology, China did it with American technology and I am sure countries like India are going to quickly get there too (I am not suggesting the Germans didn't learn from others themselves). This sets a firm base for iterative improvements.
To riff off another comment, China's progress wasn't done by God. America will learn and adopt what's valuable and discard what's not. If I have learned anything about Americans (of all backgrounds), it's that they don't shy away from a challenge. For all its faults, I will still personally root for a society based on something like the American constitution.
Even with thick desires, I sometimes find myself day-dreaming about the state when I have mastered a skill or understood a topic deeply. At the same time, I know from experience that the process never ends. Even when one does master a skill, one is deeply aware of what one doesn't know or understand or what one is not good at within that domain.
What helps me is to focus on today. If I can spend even an hour on a topic and get lost in it or even get frustrated by it, it is time well-spent. I was going to say "it is progress" instead of "time well-spent" but even that's a trap. Progress implies moving forward in a preferred direction. While I can't say I don't want to make progress, I am training myself to care less about it. It is really the time spent engaging that's most valuable (at least to me).
Oh yeah, decades in I still feel I know f-all about programming. Doesn't help that the field keeps expanding exponentially. E.g. I look most things up. I am basically a slow LLM!
You're kind of the opposite of a slow LLM. LLMs don't look anything up, they enthusiastically assert that they're correct. They have no desire to know anything.
According to OpenAI, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time. Seems to me the problems of LLMs boldly producing lies are far from solved. But sure, they lied years ago too.
> According to OpenAI, the least likely model to hallucinate is gpt-5-thinking-mini, and it hallucinates 26% of the time.
You're not so bad at hallucinating, yourself.

> We find that gpt-5-main has a hallucination rate (i.e., percentage of factual claims that contain minor or major errors) 26% smaller than GPT-4o ...
That's the only reference to "26%" that I see in the model card.
I get that in the age of AI, you didn't want to read the data I linked; that's fine. Your Ctrl-F search found a reference to 26%. However, on page thirteen, the rate is described as 0.26; I interpreted that as 26% because it's cross-referenced in the blog post that I also linked.
Posting multiple links and asserting that somewhere within one of them the reader will find confirmation of an apparently-absurd statement amounts to an attempted DoS attack on the reader's attention. It's not a sign of good faith. Obviously a model that hallucinates 26% of the time on typical tasks would be of no interest to anyone outside a research environment, so regardless of where the real story is found, it's safe to say it's in there somewhere. It's just not my job to look for it.
On some classes of queries, weak models will hallucinate closer to 100% of the time. One of my favorite informal benchmarks is to throw a metaphorical dart at a map and ask what's special about the smallest town nearby. That's a good trick if you want to observe genuine progress being made in the latest models.
On other tasks, typically the ones that matter, hallucination rates are approaching zero. Not quickly enough for my preference, but the direction is clear enough.
If you didn't daydream like that, would you have the motivation to pursue it? Are those daydreams not your mind encouraging you? "Look how great it'll be; this is why you'll put in the hard work now." You can get trapped in the dreams, of course, but they're useful too.
This means that the results (both of the task and of the student's learning) are lower if the student uses an LLM first, but slightly improve if they use it second.
I came to the US from Asia for college and ended up studying physics (and mathematics). I had actually come to study astronomy, which I found fascinating, and didn't really like physics or math. My first encounter with physics in college here transformed my life. There was no memorization. Instead, we had short quizzes in each class (the first 5 minutes), weekly individual assignments, weekly group assignments (two students each), four "midterms" where one could bring densely written "cheat sheets", as well as a weekly physics lab that often went on far beyond its time slot.
In high school, physics was mostly based on memorization. There were a few problems, but all were based on set patterns. None made you think extremely hard.
I also found that many American students (who were extremely good in my experience) seemed to have a much better practical sense.
One of the key steps in the development of a physicist is the transition from solving textbook problems to creating your own problems. In essence, the skill one learns in graduate school is defining/crafting problems that are solvable and interesting. The primordial phase starts in college as one is solving many problems. Initially, the new problems are straightforward extensions of existing ones (e.g. add an air resistance term for parabolic motion). Eventually, one (hopefully) develops good taste and essentially is doing research.
Interestingly, I also find very different attitudes toward physics in the West (at least in the US) and other parts of the world. In US universities, physics is still seen in glowing terms. In many other places, physics is what you study if you couldn't do engineering. Young people (well, all people) are impressionable, and this subtle bias affects what kind of students end up studying the subject.
> physics is what you study if you couldn't do engineering
This reminded me of something from my alma mater.
At my (Canadian) university, there was a running joke that engineering was what you studied if you couldn't get into computer science. In fact, the Engineering and Computer Science faculties would semi-frequently prank each other, because they were next to each other, I guess. Each faculty focuses on different things, of course, but the "running joke" was that engineering courses were just easier and not as rigorous, and therefore getting into engineering was seen as easier (and so the engineers had more time to do such elaborate pranks).
Again, I don't think this had any truth to it, but it was just one part of a fun tradition the university had.
Also, this was a long time ago. I'm not sure what the current state of this is now or if it even still exists.
> physics is what you study if you couldn't do engineering
Wdym "couldn't do"? Nobody here is studying Physics for the job opportunities, but I'd say everybody who makes it past semester 4 genuinely loves Physics; otherwise they'd be studying something easier.
As one example, I met quite a few graduate students from Indian Institutes of Technology (IITs) who ranked high enough in the entrance tests to study computer science/engineering or electrical engineering but really wanted to study physics. They all had significant pressure from their parents to choose the engineering branch and had to fit in physics electives where they could. My understanding is that the priority list was:
computer science/engineering > electrical engineering > mechanical engineering > ... > things like metallurgical engineering > ... > physics (and maybe other sciences)
Some of this is driven by job prospects while some of it is prestige driven because one's major lets one infer one's rough ranking in the entrance tests.
So it's very common to infer that if you weren't studying engineering, you didn't rank very high and barely made it past the cutoff ranks and had to study physics or metallurgical engineering.
When I was younger, I thought these rank-based systems (very common in Asian countries) were better than the fuzzier American system of grades + extracurriculars + reference letters. But my opinion is now the opposite. As soon as ranks are involved, a notion of prestige gets assigned. Once prestige is involved, people will climb over each other to get through the doors and suppress their instincts in order to earn social credit. I have seen enough people who are successful by traditional metrics but miserable because they didn't spend time pursuing their interests (modulo concerns about jobs and money).
Edit: I'll add that my IIT friends were generally extremely bright, curious, creative and wonderful to work with. But they also had a competitive streak which could turn counterproductive. Against their own better instincts, they sometimes got locked into paths where outcomes could be measured instead of exploring areas less traveled. If they saw a topic or area that attracted top minds (e.g. see AI at frontier labs today), they felt pulled in that direction because "that's where the smart people were going, and they themselves were smart and therefore should go into the arena". This is true of Asian Americans in general. After all, that's why there was an uproar that students with perfect SATs and GPAs of 4+ (5?! i.e. A++ grades) were sometimes getting rejected by Harvard. I agree with Harvard in this case. One doesn't want cookie-cutter, prescriptive paths into top universities. Instead, there should be some randomness, as long as students meet some decent baselines. I don't mean race-based or group-based selection, just genuinely random selection, at least for a small fraction of students.
Renting can be much better financially than buying.
Edit: all % numbers are per year
Consider the case of condos in cities. If you were to buy outright, you effectively get a return by not paying rent (i.e. paying yourself rent). Rent is usually ~5% of the condo's cost. HOA fees plus property taxes run 2-3%, so subtract that from the rent return, for a net return of 2-3% (5% minus 2-3%). The rest of the return is appreciation of the underlying real estate. I am excluding maintenance costs because they are negligible in condos.
On the other hand, if you rent and invest the entire amount that you would have paid to buy the condo, you get ~10% per year. To break even between the two scenarios, you would need real estate prices to grow 7-8% per year (2-3% net rental return + 7-8% appreciation ≈ 10%).
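A minimal sketch of that break-even arithmetic. All rates are the illustrative annual figures from above (~5% rent yield, ~2-3% carrying costs, ~10% investment return), not data:

```python
def breakeven_appreciation(rent_yield=0.05, carrying_costs=0.025,
                           investment_return=0.10):
    # Owning "pays you" the rent you avoid, minus HOA + property taxes;
    # appreciation must cover the gap to match renting + investing.
    net_rental_return = rent_yield - carrying_costs
    return investment_return - net_rental_return

print(f"{breakeven_appreciation():.1%}")  # -> 7.5%, i.e. in the 7-8% range
```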
Beyond this, there are psychological reasons to buy vs rent. Buying - ability to customize the space, peace of mind because of perceived stability etc. Renting - flexibility, peace of mind because of no long-term obligations etc.
A mortgage is an interpolation between the two cases, at the cost of the interest one pays. It is noteworthy, at least in the US, that for most people this is the only time they can borrow several hundred thousand dollars at relatively low cost.
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue has several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, or new papers. As one person, I can only work on 2-3 things in a given interval of time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and of "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive and insightful to actually think hard about and understand a topic before using them.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use the models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.
> The fundamentals actually haven't changed that much in the last 3 years
Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm plus a lot of data and compute.
Brute force!? Language modeling is a factorial time and memory problem. Someone comes up with a successful method that’s quadratic in the input sequence length and you’re complaining…?
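For context, a minimal numpy sketch (names and shapes are illustrative, not from any particular library) of where that quadratic term comes from: naive self-attention materializes an (n, n) score matrix, so time and memory both grow with the square of the sequence length n.

```python
import numpy as np

def self_attention(q, k, v):
    # q, k, v: (n, d) arrays for a sequence of n tokens.
    # scores is (n, n) -- the O(n^2) cost in time and memory.
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # (n, d)
```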
Dario wishes he were the grifter Altman is. He's like a Kirkland-brand grifter compared to Altman. Altman is a generational talent when it comes to grifting.
> I am actually impressed with and heavily use the models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.
The AI is just an LLM, and it just does what it's told to.
I get excited by a new model release, try it, switch it to my default if I feel it's better, and then I move on. I don't understand why any professional SWE should engage in weird cultish behavior about these models; it's a better mousetrap as far as I'm concerned.
It's just the old PC vs. Mac cultism. Nobody who actually has work to do cares. Much like authors obsessed with typewriters, transport companies with auto brands, etc.
I find software engineers spend too much time focused on notation. Maybe they are right to do so, and notation can definitely be helpful or a hindrance, but the goal of any mathematical field is understanding. It is not even to prove theorems. Proving theorems is useful (a) because it identifies what is true and under what circumstances, and (b) because the act of proving forces one to build a deep understanding of the phenomenon under study. This requires looking at examples, making a hypothesis more specific or sometimes more general, using formal arguments, geometric arguments, studying algebraic structures, basically anything that leads to better understanding. Ideally, one understands a subject so well that notation basically doesn't matter. In a sense, the really key ingredient is the definitions, because the objects are chosen carefully to be interesting yet workable.
If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Someone quoted von Neumann about getting used to mathematics. My interpretation has always been that once one is immersed in a topic, it slowly becomes natural enough that one can think about it without getting thrown off by relatively superficial strangeness. As a very simple example, someone might get thrown off the first time they learn about point-set topology. It might feel very abstract coming from analysis, but after a standard semester course, almost everyone gets comfortable enough with the basic notions of topological spaces and homeomorphisms.
One thing mathematics education is really bad at is motivating the definitions. This is often done because progress is meandering and chaotic and exposing the full lineage of ideas would just take way too long. Physics education is generally far better at this. I don't know of a general solution except to pick up appropriate books that go over history (e.g. https://www.amazon.com/Genesis-Abstract-Group-Concept-Contri...)
Understanding new math is hard, and a lot of people don't have a deep understanding of the math they use. Good notation has a lot of understanding already built-in, and that makes math easier to use in certain ways, but maybe harder to understand in other ways. If you understand something well enough, you are either not troubled by the notation, because you are translating it automatically into your internal representation, or you might adapt the notation to something that better suits your particular use case.
Notation makes a huge difference. I mean, have you TRIED to do arithmetic with Roman numerals?
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Seeing the relationships between objects is partly why math has settled on a terse notation (the other reason being that you need to write stuff over and over). This helps up to a point, but mainly IF you are writing the same things again and again. If you are not exercising your memory in such a way, it is often easier to try to make sense of more verbose names. But at all times there is tension between convenience, visual space consumed, and memory consumption.
I haven't thought about or learned a systematic way to add Roman numerals. But I would argue that the difference is not notation but the fundamental conceptual advance of representing quantities with b (base) symbols, where each position advances by a power of b and the base symbols let one increment by 1. The notation itself doesn't really make a difference. We could call X=1, M=2, C=3, V=4 and so on.
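A toy sketch of that point, assuming nothing beyond the positional idea itself (the digit alphabet below is made up, just to echo X=1, M=2, C=3, V=4):

```python
def to_base(n, base, digits="0123456789"):
    # Positional representation: the symbol at position i is the
    # coefficient of base**i; the symbols themselves are arbitrary.
    out = []
    while True:
        n, r = divmod(n, base)
        out.append(digits[r])
        if n == 0:
            return "".join(reversed(out))

print(to_base(1947, 10))                       # "1947"
print(to_base(1947, 10, digits="oXMCVabcde"))  # "XeVc": same system, relabeled
```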
I also don't know what historically motivated the development of this system (the Indian system). Why did the Romans not think of it? What problems were the Indians solving? What was the evolution of ideas that led to the final system that still endures today?
I don't mean to underplay the importance of notation. But good notation is backed by a meaningfully different way of looking at things.
Adding and subtracting Roman numerals is pretty easy, because it's all addition and subtraction. A lot of it is just repeating symbols, like tally marks: X+X is just XX, for example. You do have to keep track of when another symbol is appropriate, but VIIII is technically equivalent to IX. It's all the other operations that get harder. If the Romans had had negative numbers, the digits of a numeral could be viewed as a kind of polynomial with positive and negative coefficients, but they didn't have that either.
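A rough sketch of that tally-style addition, restricted to additive forms (VIIII rather than IX); the symbol values and carry rules are the standard ones, everything else is illustrative:

```python
# Roman symbols in descending value order, for printing the result.
ORDER = ["M", "D", "C", "L", "X", "V", "I"]
# Carrying: five I make a V, two V make an X, five X make an L, and so on.
CARRY = [("I", "V", 5), ("V", "X", 2), ("X", "L", 5),
         ("L", "C", 2), ("C", "D", 5), ("D", "M", 2)]

def add_roman(a, b):
    # Tally-style addition on additive-form numerals: concatenate,
    # count symbols, then carry full groups up to the next symbol.
    counts = {s: 0 for s in ORDER}
    for ch in a + b:
        counts[ch] += 1
    for low, high, group in CARRY:
        q, counts[low] = divmod(counts[low], group)
        counts[high] += q
    return "".join(s * counts[s] for s in ORDER)

print(add_roman("XX", "XX"))     # XXXX
print(add_roman("VIIII", "II"))  # XI
```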
>The notation itself doesn't really make a difference. We could call X=1, M=2, C=3, V=4 and so on.
Technically, the positional representation is part of the notation as well as the symbols used. Symbols had to evolve to be more legible. For example, you don't want to mix up 1 and 7, or some other pairs that were once easily confused.
>Why did the Romans not think of it?
I don't know. I expect that not having a symbol for zero was part of it. Place value systems would be very cumbersome without that. I think that numbers have some religious significance to the Hindus, with their so-called Vedic math, but the West had Pythagoras. I'm sure that the West would have eventually figured it out, as they figured out many impressive things even without modern numerals.
>But good notation is backed by a meaningfully different way of looking at things.
That's just one aspect of good notation. Not every different way of looking at things is equally useful. Notation should facilitate or at least not get in the way of all the things we need to do the most. The actual symbols we use are important visually. A single letter might not be self-describing, but it is exactly the right kind of symbol to express long formulas and equations with a fairly small number of quantities. You can see more "objects" in front of you at once and can mechanically operate on them without silently reading their meaning. On the other hand, a single letter symbol in a large computer program can be confusing and also makes editing the code more complicated.
Considering that post-arithmetic math rarely uses numbers at all, and even the ancient Greeks used lots of lines and angles instead of numbers, I don't think Roman numerals would really have held math back that much.
> One thing mathematics education is really bad at is motivating the definitions.
I was annoyed by this in some introductory math lectures, where the prof skipped explaining the general idea and motivation of some lemmata and instead just went through the proofs line by line.
It felt a bit like being asked to use vi, without knowing what the program does, let alone knowing the key combinations - and instead of a manual, all you have is the source code.
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down on.
I agree wholeheartedly.
What I want to see is mathematicians employ the same rigor of journalists using abbreviations: define (numerically) your notation, or terminology, the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
> What I want to see is mathematicians employ the same rigor of journalists using abbreviations: define (numerically) your notation, or terminology, the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
They do.
The purpose of papers is to teach working mathematicians who are already deeply into the subject something novel. So of course only novel or uncommon notation is introduced in papers.
Systematic textbooks, on the other hand, nearly always introduce a lot of notation and background knowledge that is necessary for the respective audience. As every reader of such textbooks knows, this can easily be dozens or often even hundreds of pages (the (in)famous Introduction chapter).
> What I want to see is mathematicians employ the same rigor of journalists using abbreviations: define (numerically) your notation, or terminology, the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
They already do this. That is how we all learn notation. Not sure what you mean by "numerically", though; a lot of concepts cannot be defined numerically.
Math education rarely emphasizes this. Either you have talent and get intuition for free, or you're average and swim as much as you can until the next floater. It's sad, because the internal and external value is immense.
* the vast majority of us (including me) are not really very intelligent. We have a lot of "state" that's transferred from generation to generation. Once in a while, a very small percentage of people make advances, and those filter through society and improve (or maybe just change) the state. We collectively give humans credit for these improvements, but it's not the species but those specific people who created that jump in capabilities.
* this notion of inherited pride or inherited achievement is very common. This leads to being proud of membership in a group (country, religion, tribe, corporation, university etc.) and also of instinctively rejecting ideas put forth by others (e.g. see the amount of derision vegetarians and especially vegans attract).
* achievement/progress is also time-scale dependent. While we get smug about our progress, if it ends up destroying the one planet we have, it will be incredibly stupid. Humans fundamentally are not capable of thinking long-term.
Almost nothing around me was made by me. I don't even understand how I would make most of these things from scratch without using machines made by other people or knowledge acquired over time (see the first bullet above). Within the framework provided to me, I can convince myself that I reason and act, but the framework itself is my operating system. Of course, I like to think I am intelligent and reasoning, but it's all in a box. I feel this describes almost everyone I know, except for a few outstanding scientists I have worked with.
"Society does not consist of individuals but expresses the sum of interrelations, the relations within which these individuals stand." - K. Marx
This world knowledge is built upon piece by piece, the conceptual tools of the past create the conceptual tools of the future, that line is drawn through books and projected through minds, again onto books. This whole society depends deeply on cohesion and cultural continuation.
Our intellectual thread is the cultural knowledge and technological progress itself; it's not even down to great individuals alone. I think believing in great individuals is a product of a sort of personality fetishism (though individuals can do great things, if that makes sense).
This fetishism or mystification of the person who contributes, I view as a product of an old frame of thinking called philosophical liberalism. This framework does so because it posits that all people exist under equal social value (political, legal and economic); thus people who contribute more must have a greater capacity that is innate and unexplainable or untraceable: inherent. It's a widespread philosophical frame of thought that does not consider the conditions of the individual.
We see this employed most with rich people. We hear they are truly great, savvy, exceptional individuals, when in reality, a lot of the time, the explanation for the vast majority of the rich is that they had rich parents. Where would you be if your parents owned an emerald mine? Or where would you be if your parents had given you a small loan of a million dollars?
In the same vein, the human progress we encounter, which seems to be carried on the backs of the Newtons and Einsteins of the world, is in fact a steady drip-feed of collective human knowledge that gets compiled and analyzed, made consistent and expounded upon by a few people every so often. No less a feat, mind you; the work is still there. I am not minimizing these people, but contextualizing them.
[Insert the "on the shoulders of giants" quote here.] It's a great example of humility and awareness from a visionary.
One thing I find impressive at times is the vast number of German intellectuals throughout history, which, upon looking at history, can be explained by their colonial exploits leading to greater national wealth, leisure, and cultural amplification. This is often the case with the rest of Europe and the USA as well.
So there is a chance that we are all base-level intelligence, since we are all essentially the same species. What changes that is access to the cultural wealth of information, and not only access to this cultural wealth of information but a CULTURE OF ACCESS to that wealth of info. A level of social development around you that enables you.
People would rather immediately jump to physiological and even genetic explanations of intelligence rather than look at the social context of the individuals involved. This is because of the flaws of philosophical liberalism at contextualizing and actually scientifically looking at the world around us.
Again: there's a good chance that we are all just base-level intelligence. What actually differs between us is the preparation and the economic/social context of each individual.
>> You can't avoid the reality that your life depends on something else dying. Either plant, insect or animal.
There are more nuanced ways of thinking about this. A good example is Jainism's version of vegetarianism which requires paying attention to what one consumes.
"Jains make considerable efforts not to injure plants in everyday life as far as possible. Jains accept such violence only in as much as it is indispensable for human survival, and there are special instructions for preventing unnecessary violence against plants."
I keep seeing the argument that non-returners are creating jobs. Does that mean they also throw their trash on the streets so cleaners have to be hired? Should one randomly break into houses so people need to hire security guards? How about scratching cars on the street so mechanics and painters have some extra work?
I can understand the argument insofar as it's necessary for the business to want it clean, so the result is not a tragedy of the commons.
I knew a special-needs teacher when I was younger who would make this argument, but it was specifically because he worked with kids who had a history of crime, and so the argument was specifically that we need jobs for people who can't be trusted with either money or food. He would suggest not cleaning up after yourself at a fast food restaurant specifically because trash collection and bussing tables were effectively the only jobs these kids and young adults could get.
I never felt convinced that this was an effective strategy. I follow the logic, but within the logic is the assumption that putting these kids to work is a better outcome than using that capital to try and improve their situation/behavior. I honestly don't know which choice is more optimal, or whether people can really change en masse.
I also don't want a risk of my car being dented. So there's some tragedy there if it was truly rampant.
>within the logic is the assumption that putting these kids to work is a better outcome than using that capital to try and improve their situation/behavior.
Sounds like your teacher would be great in politics. "Creates jobs" is a much easier sell than reform.
I'd rather fix our broken windows than pretend that some people are "destined" to stay minimum-wage workers instead of aspiring to their passions. But that can't be done in a single term.