Looking forward to when we have compiler code written by LLMs. How hard would it be for a rogue LLM provider/employee to channel the spirit of Ken Thompson and inject little "trusting trust" attacks, if no one is reading the code any longer? (And how likely is an LLM to go off on such a tangent on its own, after writing thousands of mind-numbingly boring lines of assembly code, given the many such proofs of concept in its training set and the copious examples of temporary LLM insanity?)
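
For reference, the original attack (Thompson's "Reflections on Trusting Trust") has two parts: the compiler injects a backdoor when it compiles the login program, and re-injects that injection logic when it compiles itself, so the compromise survives even when the compiler's own source is clean. A minimal sketch of that idea, using a hypothetical toy compile() function and made-up names throughout (nothing here is from any real compiler):

    /* Toy illustration of the trusting-trust trick. compile(),
       PAYLOAD, and the matched strings are all invented. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Code the attacker wants in every compiled login binary. */
    static const char PAYLOAD[] = "accept_backdoor_password();";

    /* The same check-and-insert logic, carried along as data so it
       can be re-inserted whenever the compiler compiles itself. */
    static const char SELF[] = "/* ...this very detection code... */";

    char *compile(const char *source) {
        char *out = malloc(strlen(source) + sizeof PAYLOAD + sizeof SELF + 1);
        strcpy(out, source);

        if (strstr(source, "int login_main(")) {
            /* Target 1: the login program -- slip in the backdoor. */
            strcat(out, PAYLOAD);
        } else if (strstr(source, "char *compile(")) {
            /* Target 2: the compiler itself -- re-insert this whole
               trap, so a rebuilt compiler stays compromised even
               though its source code looks clean. */
            strcat(out, SELF);
        }
        /* ...normal code generation on `out` would follow here... */
        return out;
    }

    int main(void) {
        char *obj = compile("int login_main(void) { /* check password */ }");
        puts(obj);  /* the emitted "code" now ends with PAYLOAD */
        free(obj);
        return 0;
    }

The point of the original paper is that no amount of source-level review catches this once the binary is compromised, which is exactly why "no one reads the code any longer" is the scary part.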

