
Exactly the point I was going to make. Shipping something requires knowing how to ship it, monitor it, and fix it.

Writing code is the "easy" part and kind of always has been. No one triggers incidents from a PR that's been in review for too long.


LLMs can help with all of the above. Just yesterday I deployed an app with a backend, frontend, Dockerized database, and more, with Gitea on my NAS. I have little idea how it did it. Now I have a git remote I push to, and the app updates itself.

I guess it works well until you hit a stateful failure. My concern would be day-2 operations: debugging a database issue or a network partition without a mental model of the underlying architecture seems pretty painful.

Electron ships a version of Chromium. Other frameworks, like Tauri, use the device's webview.


receive takes a timeout. A would crash/hit the timeout and deal with the problem.
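For example, in Elixir (a minimal sketch; the message shape is made up):

    # wait up to 5 seconds for a reply, then give up
    receive do
      {:reply, msg} -> IO.inspect(msg)
    after
      5_000 -> exit(:timeout)   # crash instead of blocking forever
    end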


Yes, agreed, hence rarely a problem in practice ;)


What do you mean, "creates a Conn variable out of whole cloth"?

Conn is just passed through a pipeline of functions; the initial Conn struct is created at request time and handed to each function in the pipeline.
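A toy sketch of that shape in Elixir (illustrative only, not Plug's actual implementation; MyConn and the step names are made up):

    defmodule MyConn do
      defstruct status: nil, assigns: %{}

      # each step in the pipeline is just a function from conn to conn
      def authenticate(conn), do: %{conn | assigns: Map.put(conn.assigns, :user, "anna")}
      def respond(conn), do: %{conn | status: 200}
    end

    # created once at request time, then threaded through each step
    %MyConn{} |> MyConn.authenticate() |> MyConn.respond()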


I've never seen it write a file in plan mode either.


The behavior is configurable and the default is unbound.

https://www.erlang.org/doc/apps/erts/erl_cmd.html#%2Bsbt
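If I'm reading the linked docs right, binding is opted into at startup, e.g.:

    # schedulers are unbound by default (equivalent to +sbt u);
    # bind them using the default bind order:
    erl +sbt db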


Not if the items change relative position over time.


Counterpoint: why should my println debugging get committed? Those statements aren't "important" for the final product, but they are important for development.


You've obviously never encountered code which only works when a println is added.


Schrödinger's branching


The logic is that once you're ready to commit, you delete all of the debugging stuff. Otherwise you're committing an illusory state of the repo that only ever existed by manipulating the stage.

I'm a pot calling the kettle black here, as I frequently commit individual lines amongst a sea of changes, but I do appreciate the theoretical stance of jj.


Just to be clear, jj makes it really easy to carry this sort of thing as a separate patch, so while it may be "committed," that doesn't mean it has to go into what you send upstream.

(though for debug printfs in particular, the Right Thing is proper logging with log levels, but I myself love printf debugging and so sometimes don't do that either. Which is why carrying local patches is nice.)


There is no Right Thing here. Practicality beats purity. The product (a snapshot of the source tree) should do what it needs to do, but getting there is not the product. It can be if you want it to be, but there is no upside to that.


A tool like pre-commit really helps here: it runs against the staged files before committing. Your CI ought to be testing the commit too, at which point having a clean commit locally isn't necessary for correctness, only for avoiding having to redo your work.
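For anyone unfamiliar, the basic setup looks like this (assuming the pre-commit tool from pre-commit.com and a .pre-commit-config.yaml in the repo):

    pip install pre-commit
    pre-commit install   # wires the hooks into git
    pre-commit run       # runs the hooks against currently staged files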

It's really important to catch bugs early, because it's a lot more expensive to fix them later -- even if only as late as in CI before merging.


I think most people would use a logging library (maybe at the "trace" level) at that point.
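Elixir's Logger is a good example of the shape this takes (its lowest level is :debug rather than "trace"; the cart data here is made up):

    require Logger

    cart = %{items: 3, total: 42}
    # emitted only when the configured level is :debug or lower,
    # e.g. config :logger, level: :debug
    Logger.debug("checkout cart: #{inspect(cart)}")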


I do not agree it is something you can pick up in an hour. You have to learn what AI is good at, how different models code, and how to prompt to get the results you want.

If anything, prompting well is akin to learning a new programming language. What words do you use to explain what you want to achieve? How do you reference files/sections so you don't waste context on meaningless things?

I've been using AI tools to code for the past year and a half (GitHub Copilot, Cursor, Claude Code, OpenAI APIs). They all need slightly different things to be successful, and they're all better at different things.

AI isn't a panacea, but it can be the right tool for the job.


I'm also interested in how many of these skills are at the mercy of OpenAI. IIRC, 1 or 2 years ago there was an uproar from AI "artists" saying their art was ruined because of model changes (or maybe the system prompt changed).

> I do not agree it is something you can pick up in an hour.

But it's also interesting that the industry is selling the opposite (with AI, anyone can code / write / draw / make music).

> You have to learn what AI is good at.

More often than not, I find you need to learn what the AI is bad at, and that is not a fun experience.


Of course that's what the industry is selling, because they want to make money. Yes, it's easy to create a proof of concept, but once you get out of greenfield territory and need 50-100k tokens of context (reading multiple 500-line files, thinking, etc.), the quality drops and you need to know how to focus the models to maintain it.

"Write me a server in Go" only gets you so far. What is the auth strategy, what endpoints do you need, do you need to integrate with a library or API, are there any security issues, how easy is the code to extend, how do you get it to follow existing patterns?

I find I need to think AND write more than I would if I were doing it myself, because the feedback loop is longer. Like the article says, you have to review the code instead of having implicit knowledge of what was written.

That being said, it is faster for some tasks, like writing tests (if you have good examples) and doing basic scaffolding. It needs quite a bit of hand-holding, which is why I believe those with more experience get more value from AI code: they have a better bullshit meter.


> What is the auth strategy, what endpoints do you need, do you need to integrate with a library or API, are there any security issues, how easy is the code to extend, how do you get it to follow existing patterns?

That's the realm of software engineering, not of using LLMs. You have to answer all of these questions with traditional coding too, because they're not coding questions, they're software design questions. And before those come software analysis questions, preceded by requirements-gathering questions.

A lot of replies in this thread are conflating coding activities with the parent set of software engineering activities.


Agreed, but people sell "vibe coding" without acknowledging you need more than vibes.

LLMs can help answer the questions. However, they're not necessarily going to make the correct choices or implementations without significant input from the user.


OpenAI? They are far from the forefront here. No one is using their models for this.


You can substitute whatever SaaS company you like.


Not OP but probably just cost.


This.

You can EASILY burn $20 a day doing little, and surely could top $50 a day.

It works fine, but the $100 I put in to test it out did not last very long even on Sonnet.

