
Security features seem like the ONE thing you wouldn't want an LLM generating/hallucinating ...


You wouldn't just blindly implement what the LLM generates. You would use it more as an efficient way to go through all the necessary docs. From there you'd sanity check the recommendations and _then_ implement a solution, applying your judgment along the way.
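To make the "sanity check" step concrete, here's a minimal sketch (Python). Everything in it is a hypothetical illustration: the Argon2id parameter names and the minimum values are placeholders you'd confirm yourself against the primary docs, not anything stated in this thread. The point is just that the LLM's suggestion becomes an input you check, not a decision.

    # Hypothetical: the LLM suggested password-hashing parameters.
    # Before implementing, compare them against minimums you verified
    # yourself in the official documentation (values here are placeholders).

    LLM_SUGGESTED = {"memory_kib": 19456, "iterations": 2, "parallelism": 1}

    # Minimums you confirmed by reading the primary docs, not the LLM.
    VERIFIED_MINIMUMS = {"memory_kib": 19456, "iterations": 2, "parallelism": 1}

    def passes_sanity_check(suggested: dict, minimums: dict) -> bool:
        """Reject any suggested value weaker than a documented minimum."""
        return all(suggested.get(k, 0) >= v for k, v in minimums.items())

    if __name__ == "__main__":
        if passes_sanity_check(LLM_SUGGESTED, VERIFIED_MINIMUMS):
            print("Suggestion meets the documented minimums; proceed with review.")
        else:
            print("Suggestion is weaker than the docs require; don't implement it.")

The check itself is trivial; the value is in forcing yourself to look up the documented minimums before the LLM's answer ever touches production code.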



