Hacker News

>There is still no mechanism in GenAI that enforces deductive constraints (and compositionality), ie., situations where when one output (, input) is obtained the search space for future outputs is necessarily constrained (and where such constraints compose).

I build these things for a living.

This is a solved problem.

You use multiple different types of models to supervise the worker models and force them to redo the work until you get a result that makes sense. If they fail, you hand the resulting dump to a human to figure out what went wrong, or you ignore it.

Inference-time compute is through the roof, but when you can save thousands of dollars by spending hundreds it's a no-brainer.

Some people want AI to be as infallible as god before they'd consider it useful.



Isn't that approach more brute force than a solved problem?


Not sure why people keep falling into these mental traps.

Regardless of whether the system you're deriding is a "Chinese room", "stochastic parrot", "brute force" or whatever other derisive term-du-jour you want to use, if the system performs the required task, the only thing that actually matters is its cost to operate.

And if that cost is less than paying a human, that human, and society at large, is in trouble.


Depends what problem you're trying to solve. Have we built something that can replace us completely in terms of reasoning? Not yet.

We have built something that can multiply a single person's productivity and, in some constrained scenarios, replace people entirely. Even if, say, your customer support bot is only 80% effective (only 20% of interactions require humans to intervene), that still means you can fire 80% of your support staff. And your bots will only get cheaper, faster, and better, while your humans require salary increases and hiring staff, can get sick, can't work 24/7, etc.

People so often forget that perfect is the enemy of good.


It's hardly more brute force than using a trillion parameter model in the first place.




