Maybe the skill could have references to the code. Like if everything else fails it can look at the implementation.
Intuitively it feels like if you need to look at the implementation to understand the library then the library is probably not well documented/structured.
I think the ability to look into the code should exist but shouldn't be necessary for the majority of use cases
They were just giving that as an example that Zed's inline suggestions aren't very good for basic tasks. There are hundreds of other small tasks like this that can't be handled by the language server.
Yes, elsewhere in this thread someone is complaining about lousy C# language server performance relative to IDEs. These swiss-army-knife programmer's editors will always be at a semantic language tooling disadvantage relative to IDEs.
I know that days of yak-shaving with LSP and emacs only get me to a janky imitation of Visual Studio/XCode semantic search on my C++ work codebase. "Fuck it, let an LLM auto-complete based on vibes" has some appeal when you just get sick of trying to arm-wrestle clangd into ... whatever XCode or Visual Studio are doing to have functional semantic search across the project.
Although I have to say LLMs were a disaster at vibe-auto-completing in VSCode. So I mostly stick with semantic search in the IDE and editing in emacs like I always have.
With AI Overviews and more recently AI Mode, people are able to ask questions they could never ask before. And the response has been tremendous: Our data shows people are happier with the experience and are searching more than ever as they discover what Search can do now.
I'd say commit a comprehensive testing system with the prompts.
Prompts are, in a sense, what higher-level programming languages were to assembly. Sure, there is a crucial difference, which is reproducibility. I could try to write down my thoughts on why I think it won't be so problematic in the long run. I could be wrong, of course.
I run https://pollinations.ai which serves over 4 million monthly active users quite reliably. It is mostly coded with AI. For about a year now there hasn't been a significant human commit. You can check the codebase. It's messy, but not messier than my codebases were pre-LLMs.
I think prompts + tests in code will be the medium-term solution. Humans will spend more time testing different architecture ideas and will stay involved in review, and in larger changes, the ones that require significant changes to the tests.
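To make that concrete, here's a minimal sketch of what I mean by committing prompts and tests together. The prompt and `generate` wrapper are made-up placeholders for whatever task and model call you actually have; the stub just makes the example runnable on its own:

```python
# Sketch of "prompts + tests in code": the prompt is a versioned artifact,
# and a committed test pins down the behavior expected from whatever model
# regenerates the implementation. `generate` is a hypothetical stand-in for
# the real model call, stubbed here so the example runs standalone.

SLUGIFY_PROMPT = """
Write a Python function `slugify(title: str) -> str` that lowercases the
input, replaces runs of non-alphanumeric characters with single hyphens,
and strips leading/trailing hyphens.
"""

def generate(prompt: str) -> str:
    # Stand-in for the real LLM/codegen step; pretend this is model output.
    return (
        "import re\n"
        "def slugify(title):\n"
        "    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')\n"
    )

def test_generated_slugify():
    namespace = {}
    exec(generate(SLUGIFY_PROMPT), namespace)  # load the generated function
    slugify = namespace["slugify"]
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  --Already--Slugged--  ") == "already-slugged"

if __name__ == "__main__":
    test_generated_slugify()
    print("generated code passes the committed tests")
```

The point is that the prompt becomes just another source artifact: if the model regenerates the function differently tomorrow, the committed tests are what decide whether the change is acceptable.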
Agreed with the medium-term solution. I wish I had put some more detail into that part of the post; I have more thoughts on it but didn't want to stray too far off topic.
I don't think individual examples are a good way to settle these kinds of discussions; for me the results can easily vary by 6x with exactly the same input and parameters.