But it’s not just about coding quickly; it’s also about coding correctly.
Coding LLMs don’t solve the problems of hallucination, of reaching for antiquated libraries and technologies, or of breaking large codebases because of their limited context size.
Given a well-architected component library and a set of modules, I would bet that on average I could build a correct website faster.
Builder.ai didn't tell investors they were competing with GitHub Copilot, Cody, or CodeWhisperer. Those are code assistants for developers. They told investors they were building a virtual assistant for customers. This assistant was meant to "talk" to clients, gather requirements, and automate parts of the build process. Very different space.
And like I said in another comment, creating a dedicated, pre-trained foundation model is expensive. Not to mention a full LLM.
Questions:
1. Did Craig Saunders, the VP of AI (and ex-Amazon), ever show investors or clients any working demo of Natasha? Or a product roadmap?
2. Was there a technical team behind Saunders capable of building such a model?
3. Was the goal really to build a domain-specific foundation model, or was that just a narrative to attract investment?
Just to clarify: I said "pre-trained foundation model".
LLMs are a type of foundation model, but not all foundation models are LLMs. What Builder.ai was building with Natasha sounded more like a domain-specific assistant than a general-purpose LLM.
There's no way a team of programmers could ever produce code quickly enough to come anywhere close to the response time of a coding LLM.