filearts's comments | Hacker News

Is it really AI slop if someone leverages AI to improve / transform their novel experiences and ideas into a rendition that they prefer?

I'm not taking a position on whether or not the article is AI-assisted. I'm wondering if the ease of calling someone's work "AI slop" is a step along the slippery slope towards trivializing this sort of drive-by hostility, which can be toxic in a community.


You are right about the toxicity, I will edit my comment.

There's a difference between leveraging AI to proofread or improve parts of one's writing and this - I feel like AI was overused here; it gave the whole article that distinctive smell and significantly reduced its information density.


Given that the fix appears to be to look for own properties, the attack likely referenced prototype-level module properties or the gift that keeps on giving that is `__proto__`.
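
A minimal sketch of the distinction in plain TypeScript (illustrative only, not the library's actual fix; the payload is made up):

  // A naive lookup walks the prototype chain, so attacker-chosen keys
  // can resolve to inherited members like `constructor`.
  const payload: Record<string, unknown> = JSON.parse('{"name": "alice"}');

  const naive = (payload as any)["constructor"];
  console.log(naive === Object); // true -- inherited, not data

  // An own-property check only honors keys the payload itself carries.
  const own = Object.prototype.hasOwnProperty.call(payload, "constructor")
    ? (payload as any)["constructor"]
    : undefined;
  console.log(own === undefined); // true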


I see this type of vulnerability all the time. I've seen it in Java, Lua, JavaScript, Python, and so on.

I think deserialization that relies on blacklists of properties is a dangerous game.

I think rolling your own object deserialization in a library that isn’t fully dedicated to deserialization is about as dangerous as writing your own encryption code.
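
A hedged sketch of why blocklists lose this game (TypeScript; the key sets are illustrative, not from any particular library):

  // Blocklist: misses aliases the author didn't think of
  // ("constructor", "prototype", ...), so attackers just pick another key.
  const DENIED = new Set(["__proto__"]);
  function blocklistMerge(target: any, src: Record<string, unknown>) {
    for (const k of Object.keys(src)) {
      if (DENIED.has(k)) continue;
      target[k] = src[k];
    }
  }

  // Allowlist: fails closed -- keys you didn't anticipate are dropped.
  const ALLOWED = new Set(["name", "email"]);
  function allowlistMerge(target: any, src: Record<string, unknown>) {
    for (const k of Object.keys(src)) {
      if (!ALLOWED.has(k)) continue;
      target[k] = src[k];
    }
  }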


Only if you're deserializing into objects with behavior.


What does data in a program do apart from eventually modify behavior?


Not `__proto__` but likely `constructor`: if you access `({}).constructor` you get the Object constructor, and if you then access `.constructor` on that you get the Function constructor.

The one thing I haven't understood is how it manages to perform a second call afterwards, as only being able to call the Function constructor doesn't amount to much on its own (still a serious problem, though).
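
To make the first part concrete, here's the escalation path as a sketch in plain TypeScript (this is just standard JavaScript semantics, not the actual exploit payload):

  const obj: any = {};

  // Any plain object's `constructor` is the Object constructor...
  const ObjectCtor = obj.constructor;
  console.log(ObjectCtor === Object); // true

  // ...and a function's `constructor` is the Function constructor.
  const FunctionCtor = ObjectCtor.constructor;
  console.log(FunctionCtor === Function); // true

  // The Function constructor compiles arbitrary source into a callable.
  const fn = FunctionCtor("return 6 * 7");
  console.log(fn()); // 42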


This comment from a dupe thread is worth considering: https://news.ycombinator.com/item?id=46137352


I think it is distasteful and disrespectful to call out an employee by name in this way, regardless of the merit of the rest of the OP's post.


Well, it was distasteful of them to close the OP's PR and apply the same patch with improper attribution, and then to use AI to respond when they were asked about it.


I agree with the parent post that it's distasteful.

There's no value in naming the employee. Whatever that employee did, if the company needed to figure out who it was, they could do so from the commit hashes, etc. But there's no value in the public knowing the employee's name.

Remember that if someone Googles this person for a newer job, it might show up. This is the sort of stuff that can disproportionately harm that person's ability to get a job in the future, even if they made a small mistake (they even apologized for it and were open about what caused it).

So no, it's completely unnecessary and irrelevant to the post.


> Remember that if someone Googles this person for a newer job, it might show up.

Not to sound too harsh, but this is a person who let AI badly perform a task that should have been handled by just… merging/rebasing the PR after confirming it does what it should do, then couldn't be bothered to reply and instead let the robot handle it, and then refused to fix the mess they made (making the apology void).

That's three strikes.


What if it's some junior given a job beyond their abilities, struggling manfully with whatever tools they have to hand? Is it worth publicly trashing their name? What does their name really add to this article?


A good lesson. If you as an employer look at this history and handle it appropriately in the interview ("what did you learn, what would you do better now?", for example), you can figure out whether they actually learned from it.

I'm sure lots won't, but if that's you as an employer, you're worth nothing.


What, understanding, reviewing, and accepting a two-line patch is now a job beyond a junior's ability? Beyond the ability of anyone who can call themselves a "programmer," much less a "maintainer"?

As a certified former newborn, I can tell you that finding the tit as a newborn is way harder, and yet here we all are.

"Struggling manfully," my arse, I don't know if the bar can go any lower...


It discourages others from doing the same. It might not be much, but discussing various made-up "what if..." scenarios doesn't add much either. We can just stick to the facts.


I agree that what occurred is quite egregious. But "use AI to talk to customers" and "play games with signed commits" sound much more like corporate policy than one employee's mistake.


There also might be some corpo dystopian policy that is forcing them to use AI to do this task.


Why would the company need to figure it out from commit hashes? It's all public, in public GitHub repositories, with the person's personal GitHub account: https://github.com/auth0/nextjs-auth0/pull/2381


> Remember that if someone Googles this person for a newer job, it might show up.

So you'd rather the company get incomplete information about a candidate, in hopes that the candidate gets hired from a place of ignorance? If it's something the company would avoid hiring him over, then I don't see a problem with giving them the agency to make that decision for themselves.


> This is the sort of stuff that can disproportionately harm that person's ability to get a job in the future.

Isn't that beneficial in this case?


> Remember that if someone Googles this person for a newer job, it might show up.

That's the whole point; I sincerely hope it does. Why would anyone want to hire someone that delegates their core job to a slop generator?


How can it ever be disrespectful to publish truthful information about someone?

What does respect mean and how was it violated by this post?

I think you are far outside the mainstream of journalism norms and ethics and as such should bear the burden of explaining yourself further.

I think you're the one being disrespectful.


(op here)

On the one hand, you're right, it is distasteful, I completely agree. On the other hand, GitHub, Google, and the public internet aren't everybody's CV, where they can pick and choose which of their actions get publicised, tailored towards only their successes.


They maintain a public repo.


Yeah. I can see what the parent is getting at. However, the linked PRs contain the employee's name: their username is the same name mentioned in the article. So it would have been the same even if the author had just mentioned the username instead (which would be completely acceptable in all cases). Junior employee or not, it's clear that they have the autonomy to check a PR for errors and fix it. So it's very much on them.


I don't think it is distasteful or disrespectful, he's just explaining what happened and why, and he's obviously unhappy with the whole ordeal.


Absolutely agree with this. There could be many, many reasons outside the named person's control, and that the author is not aware of, as to why this happened. It comes off as petty and arrogant, and honestly it's the same attitude I expect from most people on Hacker News. Overall it's disappointing. Respect each other's privacy.


While I think the blog post is dramatic, I don't think the author did anything wrong by mentioning the name of the person he feels wronged by. The information is public and it's the only way for that individual to be held accountable by anyone who comes across the article.


That's a bit of a naive perspective. There are plenty of situations and industries where access being down has an impact far beyond inconvenience. For example, access to medical files for treatment, allergies and surgery. Or access to financial services.


What I've started experimenting with and will continue to explore is to have project-specific MCP tools.

I add MCP tools to tighten the feedback loop. I want my Agent to be able to act autonomously but with a tight set of capabilities that don't often align with off-the-shelf tools. I don't want to YOLO but I also don't want to babysit it for non-value-added, risk-free prompts.

So, when I'm developing in go, I create `cmd/mcp` and configure a `go run ./cmd/mcp` MCP server for the Agent.

It helps that I'm quite invested in MCP and built github.com/ggoodman/mcp-server-go, which is one of the few (only?) MCP SDKs that let you scale horizontally over HTTPS while still supporting advanced features like elicitation and sampling. But for local tools, I can use the familiar and ergonomic stdio driver and have my Agent pump out the tools for me.
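
For anyone more at home in TypeScript, the same pattern looks roughly like this with the official MCP SDK (a hedged sketch; the tool name and its `go test` command are made up to mirror my `cmd/mcp` setup above):

  import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { execFile } from "node:child_process";
  import { promisify } from "node:util";
  import { z } from "zod";

  const run = promisify(execFile);
  const server = new McpServer({ name: "project-tools", version: "0.1.0" });

  // A tight, project-specific capability: run the tests for one package.
  server.tool(
    "run_tests",
    { pkg: z.string().describe("package path, e.g. ./internal/auth") },
    async ({ pkg }) => {
      try {
        const { stdout } = await run("go", ["test", pkg]);
        return { content: [{ type: "text" as const, text: stdout }] };
      } catch (err: any) {
        // Non-zero exit: hand the failure output back to the agent.
        return { content: [{ type: "text" as const, text: String(err.stdout ?? err) }] };
      }
    },
  );

  // stdio transport: the agent launches this server as a subprocess.
  await server.connect(new StdioServerTransport());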


Horizontal scaling of remote MCP servers is something the spec sadly lacks any recognition of. If you've done work in this space, bravo. I've been using a message bus to decouple the HTTP servers from the MCP request handlers. I'm still evolving the solution, but it's been interesting so far.


This is the interface I landed on to make pluggable 'session hosts': https://github.com/ggoodman/mcp-server-go/blob/b8216cc1830ad...

It goes a tad beyond the spec minimum because I think it's valuable to be able to persist some small KV data with sessions and users.


In a previous professional life, I did financial modelling for a big 4 accounting firm. We had tooling that allowed us to visualize contiguous ranges of identical formulas (if you convert formulas to R1C1 addressing, similar formulas have the same representation). This made overrides stick out like a sore thumb.

I suspect similar tools could be built for Claude and other LLMs, except that they wouldn't be plagued by the mind-numbing tedium of doing this sort of audit.
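
The normalization trick itself is easy to sketch (TypeScript; simplified - it ignores absolute `$` references and ranges):

  // Convert a column label to an index: "A" -> 1, "B" -> 2, ... "AA" -> 27.
  function colToIndex(col: string): number {
    let n = 0;
    for (const ch of col) n = n * 26 + (ch.charCodeAt(0) - 64);
    return n;
  }

  // Rewrite each A1-style reference as an offset from the formula's own
  // cell, so every "fill-down" copy normalizes to the same string.
  function toR1C1(formula: string, cellRow: number, cellCol: number): string {
    return formula.replace(/([A-Z]+)(\d+)/g, (_, col, row) => {
      const dr = Number(row) - cellRow;
      const dc = colToIndex(col) - cellCol;
      return (dr === 0 ? "R" : `R[${dr}]`) + (dc === 0 ? "C" : `C[${dc}]`);
    });
  }

  // "=A1+B1" in C1 and "=A2+B2" in C2 collapse to one representation:
  console.log(toR1C1("=A1+B1", 1, 3)); // =RC[-2]+RC[-1]
  console.log(toR1C1("=A2+B2", 2, 3)); // =RC[-2]+RC[-1]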


An idea might be to require a financially meaningful deposit to pursue an account recovery like this. The deposit would be forfeit if the identity verification failed.

Though now that I write this, it creates a perverse incentive for a company to collect deposits and deny account recovery.


It is fascinating how similar the prompt construction was to a phishing campaign in terms of characteristics:

  - Authority assertion
  - False urgency
  - Technical legitimacy
  - Security theater
Prompt injection here is like a phishing campaign against an entity with no consciousness and no ability to stop and question itself through self-reflection.


Pretty similar in spirit to CSRF:

Both trick a privileged actor into doing something the user didn't intend using inputs the system trusts.

In this case, a malicious PDF uses prompt injection to get a Notion agent (which already has access to your workspace) to call an external web tool and exfiltrate page content. This is similar to CSRF's core idea - an attacker causes an authenticated principal to make a request - except here the "principal" is an autonomous agent with tool access rather than the browser carrying cookies.

Thus, it's the same abuse-of-privilege pattern, just with a different technical surface (prompt injection + tool chaining vs. forged browser HTTP requests).


I'm fairly convinced that, with the right training, the ability of the LLM to be "skeptical" and resilient to these kinds of attacks will be pretty robust.

The current problem is that making the models resistant to "persona" injection is in opposition to much of how the models are also used conversationally. I think this is why you'll end up with hardened "agent" models and then more open conversational models.

I suppose it is also possible that the models can have an additional non-prompt context applied that sets expectations, but that requires new architecture for those inputs.


Isn't the whole problem that it's nigh-impossible to isolate context from input?


Yeah, ultimately the LLM is `guess_what_could_come_next(document)` in a loop, with some I/O either doing something with the latest guess or else appending more content to the document from elsewhere.

Any distinctions inside the document involve the land of statistical patterns and weights, rather than hard auditable logic.
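
In sketch form (runnable TypeScript with stub functions; all the names are made up):

  // Stubs standing in for the model and the tool runtime.
  function guessWhatCouldComeNext(doc: string): string {
    return doc.includes("TOOL_OUTPUT") ? "Here is the summary..." : "CALL read_pdf";
  }
  function looksLikeToolCall(guess: string): boolean {
    return guess.startsWith("CALL ");
  }
  function runToolAndGetOutput(_guess: string): string {
    return "TOOL_OUTPUT: pdf text... possibly with injected instructions";
  }

  // The model only ever sees one flat document; "system prompt", "user
  // message", and "tool output" are just text conventions inside it.
  let doc = "SYSTEM: you are a helpful agent\nUSER: summarize this PDF\n";
  for (;;) {
    const guess = guessWhatCouldComeNext(doc);
    if (!looksLikeToolCall(guess)) break; // treated as the final answer
    // Tool output lands in the same document as the instructions --
    // there is no hard boundary for an attacker to respect.
    doc += guess + "\n" + runToolAndGetOutput(guess) + "\n";
  }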


What does "pretty robust" mean, and how do you even assess that? How often are you okay with your most sensitive information getting stolen? And is everyone else going to be okay with their information being compromised once or twice a year, every time someone finds a reproducible jailbreak?


As is often the case, reality imitates satire. This reminds me of the "and then" scene from Dude, Where's My Car? https://youtu.be/iuDML4ADIvk


If you were willing to bring in additional zod tooling or move to something like TypeBox (https://github.com/sinclairzx81/typebox), the JSON schema would be a direct derivation of the tools' input schemas in code.
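
A hedged sketch of what that buys you with TypeBox (the tool shape is made up):

  import { Type, type Static } from "@sinclair/typebox";

  // The schema value *is* JSON Schema at runtime...
  const AddInput = Type.Object({
    a: Type.Number(),
    b: Type.Number(),
  });

  // ...and the static type is derived from that same definition.
  type AddInput = Static<typeof AddInput>; // { a: number; b: number }

  console.log(JSON.stringify(AddInput));
  // {"type":"object","properties":{...},"required":["a","b"]}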


The json-schema-to-ts npm package has a FromSchema type operator that converts the type of a JSON schema directly into the type of the values it describes. Zod and TypeBox are good options for users, but for the reference implementation I think a pure type-level solution would be better.
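
For example (a hedged sketch; the schema is illustrative):

  import type { FromSchema } from "json-schema-to-ts";

  // `as const` preserves the literal types that FromSchema reads.
  const addInputSchema = {
    type: "object",
    properties: {
      a: { type: "number" },
      b: { type: "number" },
    },
    required: ["a", "b"],
    additionalProperties: false,
  } as const;

  // Pure type-level derivation: no runtime dependency at all.
  type AddInput = FromSchema<typeof addInputSchema>;
  // => { a: number; b: number }

  function handleAdd(input: AddInput): number {
    return input.a + input.b;
  }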

