The same way you do security for manually written code: rigorously. But in this case, you can also have AI do your code reviews and suggest/write unit tests. Or write out a spec and refine it. Or point it at OWASP and say: look at this codebase and make a plan to check for the OWASP Top 10.
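To make that concrete, here's a minimal sketch of the kind of check such a review tends to surface: an injection test (OWASP A03). find_user and the table layout are hypothetical stand-ins for your own code, not anything from a real project:

    import sqlite3
    import unittest

    def find_user(conn, username):
        # Parameterized query; an AI review would flag the concatenated
        # version ("... WHERE name = '" + username + "'") as injectable.
        cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
        return cur.fetchone()

    class InjectionTest(unittest.TestCase):
        def setUp(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
            self.conn.execute("INSERT INTO users (name) VALUES ('alice')")

        def test_injection_payload_matches_nothing(self):
            # With string concatenation this payload would match every row.
            self.assertIsNone(find_user(self.conn, "' OR '1'='1"))

        def test_normal_lookup_still_works(self):
            self.assertEqual(find_user(self.conn, "alice")[1], "alice")

    if __name__ == "__main__":
        unittest.main()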
And have another AI review your unit tests and code. It's pretty amazing how much nuance they pick up. Then just rinse and repeat until the AI can't find anything anymore (or you notice it going in circles with the same suggestions).
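If you want to script that loop, here's a hedged sketch of the stopping condition. The get_findings callable and the canned findings are stand-ins, not a real API, and in practice you'd fix the findings between rounds:

    def review_until_stable(get_findings, code, max_rounds=5):
        """Re-review until clean, or until the reviewer only repeats itself."""
        seen = set()
        for _ in range(max_rounds):
            findings = set(get_findings(code))
            if not findings:
                return "clean"      # reviewer found nothing
            if findings <= seen:
                return "stalled"    # going in circles: same suggestions again
            seen |= findings        # (apply fixes to `code` here in practice)
        return "out of rounds"

    # Stub reviewer for demonstration: repeats the same finding twice.
    canned = iter([["missing input validation"], ["missing input validation"]])
    print(review_until_stable(lambda _code: next(canned), "def handler(): ..."))
    # prints "stalled"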
Yeah, some of these comments make it sound like we had zero security issues pre-AI. I think the challenge is what you touched on: you have to tell the AI to handle it, just like anything else you want as a requirement. I've used AI to 'vibe' code things, and they have turned out pretty well. But I absolutely leaned on my 20+ years of experience to 'work' with the AI to get what I wanted.
If you never put your personal side project on the public web, you had very few security issues resulting from it. We weren't talking about companies in this thread.
Are the frontend folks having such great results from LLMs that they're OK with "just let the LLM check for security too" for non-frontend-engineer created projects that get hosted publicly?