Hacker News

The thinking is quite fascinating, though; I love reading it, especially when it notices that something must be wrong. It will probably be very helpful for refining answers, both for itself and for other models.

It does add latency, of course, but I still think I could cover all the AI needs of my company (industrial production) with a simple, older off-the-shelf PC. My GPU is fairly recent, but it runs the smallest model of the series, and otherwise the machine is a rusty bucket.

I haven't tested it thoroughly yet, but I have some invoices from which I need to extract information, and so far it has done a perfect job. That said, I don't think any LLM can yet do this without someone checking the output.
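One way to make that human check cheaper is to ask the model for structured JSON and validate it before anyone looks at it. A minimal sketch, assuming the model returns JSON and assuming illustrative field names (`invoice_number`, `date`, `total`) that are not from the comment above:

```python
import json

# Fields we expect the model to extract (illustrative assumption).
REQUIRED_FIELDS = {"invoice_number", "date", "total"}

def validate_extraction(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the output
    looks plausible and can go to a human for a quick final check."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["model output is not valid JSON"]
    # Flag any required fields the model failed to extract.
    missing = REQUIRED_FIELDS - data.keys()
    problems += [f"missing field: {f}" for f in sorted(missing)]
    # Cheap sanity check: the total should parse as a number.
    if "total" in data:
        try:
            float(str(data["total"]).replace(",", "."))
        except ValueError:
            problems.append("total is not numeric")
    return problems

good = validate_extraction(
    '{"invoice_number": "A-123", "date": "2024-05-01", "total": "199.99"}'
)
bad = validate_extraction('{"date": "2024-05-01"}')
print(good)  # []
print(bad)   # ['missing field: invoice_number', 'missing field: total']
```

Anything with a non-empty problem list gets routed to manual review; the rest still gets spot-checked, just faster.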


