Prompt engineering isn’t hard to master; it takes weeks to months, tops. That is the point. LLMs are at the top of the S-curve.
Assuming prompt engineering is hard, and assuming that LLMs are going to continue to make any kind of substantial leap without _any_ evidence other than blind faith, is as close to believing in a religion as it gets. Having blind “faith” in this house of cards, saying things like “when things continue to advance” without any evidence that there will be any advancement, is absolutely insane.
I’m having a hard time believing you needed me to spell that out.
Define "hard to master". I use LLMs all the time, sometimes as a writing assistant, and have never had the problem of trying to pass off a GPT response to a single one-line prompt as my own words. I haven't found LLMs hard to master.
An 80-year-old who barely uses computers and still types full sentences into Google (or who struggled for years to unlearn that habit) might find LLMs hard to master. Someone with poor written communication skills might find LLMs hard to master. Shockingly, it turns out that different people have different skills and life experiences.
I never used the word "faith". I'm not sure why you feel the need to make up a straw man to attack rather than respond to my comment as written, or why you feel the need to repeatedly insult me and accuse me of bad faith. It sounds like you're more interested in trying to win some perceived argument than engaging in constructive discussion.
It's a pretty safe assumption. Models and tooling are still improving on a regular basis, and most people haven't even used LLM chatbots, much less mastered them. Don't forget that we're talking about an extremely novel technology, which for all intents and purposes has existed for less than three years.
Do you honestly believe that the LLM tech landscape and end-user competency with LLMs will both look exactly the same in 2050 as they do in September 2025? You don't think the codebases of social media spambots will at least have become sophisticated enough to avoid copying the default writing style of a basic ChatGPT response? This is a very conservative prediction. Based on the vitriol you've been responding with, one would think I'd written that AGI was around the corner and anyone who disagreed with me was an idiot.