- Instead of trying to get LLMs to answer user questions, write better FAQs informed by reviewing tickets submitted by customers
- Instead of RAG for anything involving business data, have some DBA write a bunch of reports that answer specific business questions
- Instead of putting some copilot chat into tools and telling users to ask it to e.g. "explain recent sales trends", make task-focused wizards and visualizations so users can answer these with hard numbers
- Instead of generating code with LLMs, write more expressive frameworks and libraries that don't require so much plumbing and boilerplate
Of course, maybe there is something I am missing, but these are just my personal observations!
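To make the "DBA reports" bullet concrete: the idea is one parameterized query per business question, returning hard numbers, instead of letting an LLM guess at the schema. This is only a minimal sketch; the `sales` table, its columns, and the question itself are hypothetical, and sqlite3 is used just to keep the demo self-contained.

```python
# Sketch of the "reports instead of RAG" idea: each business question gets
# its own vetted, parameterized query. Table/column names are hypothetical.
import sqlite3

def monthly_sales_by_region(conn, year: int, month: int):
    """Answer one specific question (sales by region for a given month)."""
    return conn.execute(
        """
        SELECT region, SUM(amount) AS total
        FROM sales
        WHERE strftime('%Y', sold_at) = ? AND strftime('%m', sold_at) = ?
        GROUP BY region
        ORDER BY total DESC
        """,
        (f"{year:04d}", f"{month:02d}"),
    ).fetchall()

# Tiny in-memory demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL, sold_at TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", 120.0, "2024-03-05"), ("EMEA", 80.0, "2024-03-20"),
     ("APAC", 150.0, "2024-03-11"), ("APAC", 40.0, "2024-04-02")],
)
print(monthly_sales_by_region(conn, 2024, 3))
# [('EMEA', 200.0), ('APAC', 150.0)]
```

The point is that the query is written once by someone who knows the schema, reviewed, and then answers the question deterministically every time.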
I agree. That said, I've seen first-hand how the AI fever and the mandate from the top have finally busted enough information silos that "have some DBA write a bunch of reports that answer specific business questions", which previously just wasn't feasible, now is.
Yeah, I will say it provides a good excuse to leadership & external stakeholders for fixing technical debt and working cross-functionally! Making infrastructure "AI-ready" and all that.
I imagine more of the benefit of AI will come from companies prioritizing infrastructure and effective organization than from the technology itself.
With all due respect, all of those examples are "yesterday's" examples ... that's how we have been bringing money to businesses for decades, no? Today we have AI models that can already perform as well as, almost as well as, or even better than the average human at many, many tasks, including the ones you mentioned.
Businesses are incentivized to be more productive and cost-effective since they are profit-driven, so they naturally see this as an opportunity to make more money by hiring fewer people while keeping the amount of work done roughly the same, or even increasing it.
So the "classical" approach to many of these problems is, I think, already a thing of the past.
> Today we have AI models that can already perform as well as, almost as well as, or even better than the average human at many, many tasks, including the ones you mentioned.
We really don't. There are demos that look cool onstage, but there's a big difference between "in-store good" and "at-home good": these products aren't living up to their marketing in actual use.
IMO there is a lot of room to grow within the traditional approaches of "yesterday". The problem is that large orgs get bogged down in legacy + bureaucracy, and most startups don't understand the business problems well enough to build a better solution. And I don't think there is any technical silver bullet that can solve either of these problems (AI or otherwise).
I am wondering how often you use AI models. Because I use them on a daily basis, and as much as they have limitations, I find them to perform incredibly well. They are very far from being a demo; the last time they were a demo that merely looked "cool" was around 2020/21, when they were impressive for spitting out haiku poetry, and perhaps 2022, when capabilities were not as good. But today? Completely mind-blowing.
If you're not convinced, I suggest you look at law firms, hospitals, and laboratories ... all of which are using AI models today for both research and boilerplate work. Creative industries are literally being erased by generative AI as we speak. What will happen to Photoshop and similar tools when I can create whatever I want with a free AI model in literally 2 seconds, without prior knowledge? What will happen to the majority of movie-effects artists when a single person can do the work of 5 people at once? Or interior designers? Heck, what will happen to Google search? I anticipate nobody will be using it in a year or two. I already don't, because it's a massive time sink compared to what I can do with Perplexity, for example.
There are many, many examples. You just need an open mind to see them.
You're making a ridiculously overconfident statement.
* Show me a discrete manufacturing company using AI models for statistical process control or quality reporting
* Show me a pharmaceutical company using AI models for safety data analysis
* Show me an engineering company using AI models for structural design
The list goes on and on. There are precious few industries or companies that have replaced traditional analysis & prediction with AI. Why? Because at least one of three things is true: 1) their data is already in highly structured relational stores with long legacies of SQL-based extraction and analysis, 2) they're in regulated industries and must have audit-proof, explainable reporting, or 3) they need evidence-based design and analysis with a key component coming from real people observing real processes in action.
For all the hyped "AI Automation" you read about, there are 100 other things that aren't automated, or where firms don't believe they can be, or where they'll struggle to for [reasons].
Right, right, I get it. Pharma, structural engineering, discrete manufacturing, ..., all the industries that are "too hard" to be conquered by some stupid statistical parrot. You're being delusional, my friend, but I am not going to be the one trying to persuade you otherwise. I am here to share experiences and have interesting discussions from which I can learn, not to combat triggered and defensive strangers on the internet. And FWIW, both your conclusion and premise, and your interpretation of my comment, are wrong.
I try them from time to time, but I have yet to see AI models produce a useful output for my work. The problems I work on are not well-represented in training data and internet-based resources, and correctness matters far more than speed.
For my work, it's important to form strong and correct mental models of complex systems so I can reason about them well. It's more about thinking and writing clearly than anything else. LLMs tend to include subtle mistakes or even completely incorrect information (and reasoning!) which disrupts this process.
On the creative industry side...well, you can produce some results that look fine by themselves, but producing large-scale cohesive artwork (games, movies, etc.)? It's mostly human elbow-grease for the foreseeable future.
> LLMs tend to include subtle mistakes or even completely incorrect information (and reasoning!) which disrupts this process.
You see, so many humans do that as well, yet we make it out as if LLMs are somehow special here. Yes, they make mistakes; we make mistakes, I make mistakes, your colleague makes mistakes. But that's not the point of this discussion at all, and singling LLMs out for it is a form of confirmation bias.
Why this gag reflex happens, I think, is because people are biased toward catching somebody making a mistake so that they can feel superior and irreplaceable. We do that to keep our position strong (in society, at work), and this is completely natural: it's called the survival instinct, and it is present in our species regardless of LLMs. LLMs are just one of the things that can obviously trigger it.
So your response is no more special than that of other people combating AI, but take into account that "The problems I work on are not well-represented in training data", "correctness matters far more than speed", and "it's important to form strong and correct mental models of complex systems so I can reason about them well" all make a great deal of strong assumptions. Almost any domain can literally copy-paste these into its defense, and this is also something interesting I have observed over the years: in every domain I worked in, and there were plenty, each one thought it was the "hardest". Vanity.
High quality writing (e.g. MDN Web Docs, Go Documentation, internal docs) does not have a tendency to include mistakes or incorrect reasoning because it comes from a place of clear thinking and goes through a continuous process of peer review and improvement.
LLM outputs are, at best, first drafts that have not been reviewed or revised, and they certainly do not have analytical reasoning behind their content.
I am not claiming that my domain is uniquely challenging, but the statements I made are factual for my work (and likely many adjacent fields, too).
Would you trust your life to a pacemaker or aircraft control system designed and manufactured quickly by people without a correct understanding of what they are developing, working purely off of information from the Internet and other semi-public sources? I wouldn't. But maybe you are braver than me!
Nobody here is talking about pacemakers, nor am I suggesting that AI will replace 100% of the workforce.
While we're on the topic of writing documentation, did you hear about the latest layoffs at MySQL, which, among other teams, hit the documentation team hard, cutting it from 8 to 3 people?
I mean, the software I develop is the system of record for the engineering and development of medical devices (and aerospace, automotive, industrial, etc. systems). So pacemakers and aircraft are relevant examples.
I hadn't heard about the MySQL layoffs. I hope the affected people find good new opportunities.
I was trying to give a counter-example of people being replaced on an extremely complex piece of software as we speak. This may or may not be related to AI, but I am inclined to think there is some correlation: yesterday 8 people on the team, today 3, tomorrow who knows, maybe only a single person.