> An "AI Agent" replacing an employee requires intentional behaviour: the AI must act according to business goals, act reliably using causal knowledge of the environment, reason deductively over such knowledge, and formulate provisional beliefs probabilistically.

I mean this in the least cynical way possible: the majority of human employees today do not act this way.

> The vast majority of our work is already automated to the point where most non-manual workers are paid for the formulation of problems (with people), social alignment in their solutions, ownership of decision-making / risk, action under risk, and so on.

This simply isn't true. Take any law firm today, for example: for every person doing the social alignment, ownership, and risk-taking, there is an army of associates taking notes, retrieving previous notes, and filling out boilerplate.

That kind of work is what AI is aiming to replace, and it forms the bulk of employment in the global West today.



The illusion you appeal to is so common, it ought to have a name. I guess something like the "repetition-automaton illusion", or perhaps "the alienation of the mind in creative labour". Here's a rough definition: the mistaken belief that producing repetitive products employs only repeatable actions (skills, etc.).

A clear case: acting. An actor reads from a script, and the script is pre-given. Presumably nothing could be more repetitive: each rehearsal is a repeat of the same words. And yet Anthony Hopkins isn't your local high schooler; the former is paid millions and the latter is not.

That paralegals work from the same template contracts, and produce very similar-looking ones, tells you about the nature of what's being produced: that contracts are similar, work from templates, are easily repeated, and so on. It tells you nothing about the work itself (except under an assumption we could call "zero creativity"). (Consider that if law firms were really paid for their outputs qua repeats, they'd be running on near-0% profit margins.)

If you ask law firms how much they're employing GenAI here, you'll hear the same thing: "we tried it, and it didn't work; we don't need our templates repeated with variation, they need to be exact, and filled in with specific details from clients, etc." And I know this because I've spoken to partners at major law firms on this matter.

The role of human beings in much work today is as I've described. The job of the paralegal is already very automated: templates for the vast majority of their contract work exist and are in regular use. What's left over is very fine-grained, but very high-value: the specialisation of these templates to the given case, which requires seeking out information from partners, clients, and so on.

The great fear amongst people subject to this "automaton" illusion is that they are paid for their output, and since their output is (in some sense) repeated and repeatable, they can be automated away. But these "outputs" were in almost all cases nightmarish liabilities: code, contracts, texts, and so on. They aren't paid to produce these awful liabilities; they are paid to manage them effectively in a novel business environment.

E.g., programmers aren't paid for code; they're paid to formalise novel business problems in ways that machines can automate. Non-novel solutions are called "libraries", and you can already buy them. If half of the formalisation of the business problem becomes 'formulating a prompt', you haven't changed the reason the business employs the programmer.


This is probably the best description of the central issue I've seen. I know even in my own work, which is a very narrow domain in software, I've found it troublesome to automate myself. Not because the code I write is unique or all that difficult, but because the starting conditions I begin with depend on a long history of knowledge that I've built up, an understanding of the business I'm part of, and an understanding of user behavior when they encounter what I've built.

In other words, I can form a prompt that often one-shots the code solution. The hard part is not the code, it's forming that prompt! The prompt often includes a recommendation on an approach that comes from experience, references to other code that has done something similar, and so on. I'm not going to stop trying to automate myself, but it's going to be a lot harder than anyone realized when LLMs first came out.
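
For what it's worth, here's a rough sketch of the shape of that prompt-building work (every name here is hypothetical, purely for illustration). The point is that the model call is the trivial, interchangeable part; the assembly of context is where the accumulated knowledge goes:

    # Hypothetical sketch: the human effort lives in assembling context
    # (business knowledge, prior code, a recommended approach), not in
    # the generation call itself.

    def build_prompt(task: str, approach: str, prior_code: list[str],
                     business_context: str) -> str:
        """Fold experience-derived context into a single prompt."""
        references = "\n\n".join(prior_code)
        return (
            f"Business context:\n{business_context}\n\n"
            f"Recommended approach (from experience):\n{approach}\n\n"
            f"Similar code we've written before:\n{references}\n\n"
            f"Task:\n{task}\n"
        )

    prompt = build_prompt(
        task="add rate limiting to the billing webhook handler",
        approach="token bucket per customer id; fail open if the limiter errors",
        prior_code=["<limiter code from the auth service>"],
        business_context="the provider retries webhooks, so handlers must be idempotent",
    )
    # response = some_llm_client.generate(prompt)  # hypothetical client; the easy part

Every argument to build_prompt except the task itself comes from experience the model doesn't have. That's the part I can't see how to automate away.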


It is also about responsibility. If something goes wrong, you can blame the human; blaming the AI is not acceptable.


Aren't we already doing that with self-driving cars?

I have yet to see any serious consequences from their epic fails.


I can't imagine we will ever objectively compare the two. Maybe in 100 years someone will blame a crash on human drivers.


You're correct, but what can be affected is the number of workers. Consider the example of acting: in the past, every major city supported a number of actors and playhouses. Cinema and TV destroyed this need, and the number of jobs for local actors is minuscule now.


This comment has communicated what I've been struggling to articulate for months now, and in a much more succinct and clear way. Well done :)



