Hacker News
krackers on Jan 3, 2025 | on: Can LLMs write better code if you keep asking them...
Yeah, we know positive reinforcement works better than negative reinforcement for humans, so why wouldn't you use the same approach with LLMs? It's also better for your own conscience.