My personal project output has gone up dramatically since I started using AI, because I can now use times of night when I'm otherwise too mentally tired: I work with AI to crank through a first draft of a change that I can then iterate on later. This has let me start actually implementing side projects I've had ideas about for years, and build software for myself in a way I never could before (at least not since I had kids).
I know it's not some amazing GDP-improving miracle, but in my personal life it's been incredibly rewarding.
I had a dozen domains and projects sitting on the shelf for years, and now 8 of them are under significant active development. I've already deployed 2 sites to production. My GitHub activity is lighting up like a Christmas tree.
I find a lot of value in using it to give half-baked ideas momentum. Some sort of "shower thought" for a personal project will occur to me while I'm at work, and I'll prompt Claude Code to analyze it and demonstrate an implementation for review later.
On the other hand, I believe my coworker may have taken it too far. His productivity seems to have slipped significantly, and from my perspective the approaches he's using are convoluted and produce no useful outcome. I'm almost worried about him, because his descriptions of what he's doing make no sense to me or my teammates, and he's spending a lot of time on it. I'm considering telling him to chill out, but who knows, maybe I'm just not as advanced a user as he is? Anyone have experience with this?
It started as an approach to a mass legacy code migration, a sound idea with the potential to save time. I followed along and understood his markdown and agent setup for analyzing and porting the legacy code. I reviewed the results that applied to my projects; they were a mixed bag, but I think it saved some time overall. Now, though, I don't get where he's going with his AI aspirations.
My best attempt at understanding is that he wants to work entirely through chats, never writing code, and he's doing this by improving agents through chats. He's really swept up in the entire concept. I consider myself optimistic about AI, but his enthusiasm feels misplaced.
It's gotten to the point where his work is slipping and management is asking him where his results are. We're a small team, and management isn't savvy enough to see he's getting NOTHING done, and I won't sell him out. But if this is a known delusional pattern, I'd like to address it and point him to a definition and/or past cases so he can recognize the pattern and avoid trouble.
I really haven't tried that stuff myself except for Claude Code, but I do recall seeing some Amazon engineer who worked on Amazon Q, and his repos were... something. He was making PRs that amounted to telling the AI "we are going to utilize the x principle by z for this," with hundreds of lines of "principles" and similar stuff that would obviously just pollute the context. Huge numbers of commits, but it was all this, him basically trying to get magic working. To someone like me it was obviously a futile effort, but he clearly didn't get it.
I think the problem is that people don't understand transformers: they're basically huge datasets in model form, auto-generating continuations of the context (your prompts and the model's responses). So you're basically just getting mimicked responses. That can be helpful, but I have a feeling there's a fundamental limit, maybe a mathematical one, where you can't really get it to do something unless you provide the solution itself in your prompt, because otherwise it would have to be in its training data (which it may be, for common stuff like boilerplate, hello world, etc.).
But maybe I'm just missing something. Maybe I don't get it.
But I guess if you really want to help him, I'd play around with Claude/GPT and see how it just plays along even when you pretend, like going along with a really stupid plan, and how it'll just string you along. And then you could show him.
Or... you could ask management to buy more AI tools, make him head of AI, and transition to being an AI-native company.
I don't know about a 'delusional pattern', but a common problem is that the AI wants to be helpful, and the AI is sycophantic, so when you are going down an unproductive path the AI will continue to help you and reinforce whatever you are feeling. This can be very hard to notice if you are an optimistic person who is process oriented, because you can keep working on the process forever and the AI will keep telling you that it is useful. The problem with this, of course, is that you never know whether you are actually creating anything useful without real human feedback.

If he can't explain what he is doing adequately, ask him to have the AI explain it and read that. You should figure out pretty quickly whether it is bullshit. If he can't get the AI to tell you what he is doing, and he can't explain it himself in a way that makes sense, then alarm bells should ring.