This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
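For what it's worth, here's a minimal sketch (in a REPL or notebook) of the kind of hands-on poking I mean; the file name and columns are made up:

    # Hypothetical dataset; the point is typing each probe yourself.
    import pandas as pd

    df = pd.read_csv("flights.csv")

    df.shape                                   # how much data is there?
    df.dtypes                                  # which columns parsed as what?
    df["delay_minutes"].describe()             # range, outliers, suspect values
    df.isna().mean().sort_values(ascending=False).head()   # where are the holes?

    # Grouping by hand surfaces quirks a generated summary glosses over:
    df.groupby("carrier")["delay_minutes"].agg(["mean", "count"])

Every dead end you hit while typing these is exactly where the intimacy with the data comes from.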
Complete tangent, but, for me, this is where AI shines. I've been able to find things I had been looking for for years. AI is good at understanding that you mean "continued fraction" even when you say "infinite series", especially if you provide a bit of context.
Absolutely. In fact my post above originally said "infinite series" instead of "continued fraction", but Googling again, Google AI did mention "continued fraction" in its summary, so I edited my post and tried searching on that, which led me to the solution!
100% agree. It’s great if you have a clear sense of what you’re looking for but maybe have muddled the actual terminology. You can find words, concepts, books, movies, etc, that you haven’t remembered the name of for years.
Every time I fly, I marvel at how much engineering and know-how went into making the airport that I'm using. From the oddly shaped trucks with various functions, to mundane elements (elevators, escalators, ...), to advanced technology (radio communication, radars, ...), to the sheer organizational feat (thousands of people coming in every day to execute their carefully planned tasks). This text will give me one more thing to think about :)
Same, and I also just marvel at the airplanes. This video made me think of the several grass runways in my area. They're literally just maintained by some guy mowing them, and yet people land on them in tiny planes as well as twin-engine aircraft.
I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But if we’re serious about avoiding the trap of letting AI write working code we don’t understand, then AI can be very useful. Unfortunately, the trap is very alluring.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
I'd say the new problem is knowing when understanding is important and where it's okay to delegate.
It's similar to other abstractions in this way, but on a larger scale due to LLMs having so many potential applications. And, of course, due to the non-determinism.
My argument is that understanding is always important, even if you delegate. But perhaps you mean sometimes a lower degree of understanding may be okay, which may be true, but I’d be cautious on that front. AI coding is a very leaky abstraction.
We already see the damage of a lack of understanding when we have to work with old codebases. These behemoths can become very difficult to work in over time as the people who wrote them leave, and new people don’t have the same understanding to make good, effective changes. This slows down progress tremendously.
Fundamentally, code changes you make without understanding them immediately become legacy code. You really don’t want too much of that to pile up.
I'm writing a blog post on this very thing actually.
Outsourcing learning and thinking is a double-edged sword that comes back to bite you later. It's tempting: you might already know a codebase well and set agents loose on it. You know enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors like antirez.
Similarly, you see success stories with folks making something greenfield. Since you've delegated decision making to the LLM and gotten a decent looking result it seems like you never needed to know the details at all.
The trap is that your knowledge of why you've built what you've built the way it is atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.
When you say perfectly aligned, what kind of precision are we talking about? If we aimed a receiver at a nearby star, would we be able to achieve this kind of precision?
I don't know who wrote the title for this submission, but adding a question mark that is not in the linked article seems like a terrible editorial decision.
Agree, it's editorialising and not allowed under the guidelines here (unless it was in the original and later changed), but given the uselessness of the field you could argue that any "String Theory" claim in any title should have an automatic question mark (or perhaps several) attached.
It's not useless, though. String theory may be a fad (or "difficult to prove", per Witten), but some of the mathematics developed in researching it, or in trying to prove it, has been used in other fields.
If you want to bash badly spent potential, look at people doing cutting-edge ad research and optimization, or HFT. String theory is at least good base research that others can build on.
Fair point, but waste in one domain should not be used to excuse waste elsewhere. Unless your argument is that it's generally hard for human societies to know where to best invest their scientific talent without the benefit of hindsight.
Think of it as a playground for the exercise and training of a pool of minds that will one day either make the glove fit or kick the sand castle over, replacing it with a better mousetrap.
Too many metaphors? Hmmm, maybe fold in some dimensional reduction somehow.
I agree, plus string theory takes a person who would otherwise have done research somewhere else. The Googler, the Jane Street quant, or the guy who decides to travel the world in a canoe all have different motivations and would probably need far more persuading to be in academia.
String theory has generated a lot of hype over the years but never delivered anything. Looks to me like it has all the negatives you hate about ad research.
Hm, string theory can describe a lot of things, but it's not testable with current technology. I'm pretty sure that other mathematical constructs exist that could also describe a similar set of properties, but we just happened to stumble upon string theory first, and got enamored with some of the nice properties it had initially.
High-end SWs who advertise often list prices of $500-$800/hr; as with any "luxury service", there are undoubtedly those who amplify their desirability by not advertising and charging even more.
But $10,000 for 15 minutes is a pretty outlandish scenario. It might have played out somewhere in history, sure.
I wouldn't be surprised if they interacted with a Saudi prince or something who threw $10k at them for 15 minutes as a power play, but that's likely a one-off kind of thing.