jamilton's comments

If AI is at the point in 10 years where it is exactly as capable as your average junior 3D professional, it will probably have automated a ton (double-digit percentage?) of current jobs, such that nothing is safe. There's a lot of complexity, the time horizon is fairly long, the work is very visually detailed, it's creative and subjective, and there's not a lot of easily accessible, high-quality training data.

It's like 2D art with more complexity and less training data. Non-AI 2D art and animation tools haven't been made irrelevant yet, and don't look like they will be soon.


Not quite. The junior also produces source files that a senior can enhance. AI gives you an end result that can't be tinkered with as easily.


Yeah, LLMs used to not be up to par on new Project Euler problems, but GPT-5 was able to solve a few of the recent ones I tried a few weeks ago.


Yeah. I wonder why that is - is it because it highlights a conflict between our actions and values? If left unexamined, it's a non-issue, so having it spelled out feels like a problem being created?


I like your description.

I think sometimes this is when we find our way to the middle of two relatively simple drives: "be an orthodox group member / avoid being a social outcast" and "avoid the discomfort of cognitive dissonance / admitting hypocrisy".

If there aren't immediate consequences for inaction (especially if there ARE costs and/or social consequences for action), we're very good at convincing ourselves to ignore it (or telling ourselves we'll EVENTUALLY deal with it, just not right now).


I would much rather assume the people I'm interacting with are honest and conveying their real feelings, vs playing some (probably) Machiavellian game with N levels of dishonesty and manipulation from what could easily be a malevolent person at the core. At least that tends to be the assumption when you pick up on a lack of authenticity in this way.

When you have a real indication of dealing with a master manipulator, it's very understandable that you should use an abundance of caution. That's probably an instinct in us at this level.

Of course everyone is at least a little aware that they're putting on a bit of a ruse with their public persona, but that needs to be tethered to some level of authenticity or you'll just be sending out Patrick Bateman vibes.


This strikes me as a glass-half-empty interpretation. Why is the stuff from the blog post necessarily Machiavellian and manipulative? I didn't read any of that into the post. Rather, it was about how to create win-win situations where the people involved genuinely enjoy each other's company. No need for bad intentions here.


    > When you have a real indication of dealing with a master manipulator
This statement seems like a paradox; forgive my "No True Scotsman" example. If the person is such a "master manipulator," what indications do you have? The social normies will miss them, or will think they themselves are the ones making the suggestions/decisions. This is the hallmark of master-craft salespeople.


Wouldn't you think what matters more is the other person's goal? If their goal is to enrich and improve both of your lives, does it matter whether they consciously use social techniques or have a natural, automatic ability to do so?

It is also autism vs. psychopathy. Patrick Bateman is nowhere close to someone autistic trying to learn those socially successful behaviours. Patrick Bateman is not a terrible human being because he is inauthentic; he is a terrible human being because of the acts he committed and wanted to commit.


I know this has been said many times before, but I wonder why this is such a common outcome. Maybe from negative outcomes being underrepresented in the training data? Maybe that plus being something slightly niche and complex?

The screenshot method not working is unsurprising to me; VLLMs' visual reasoning is very bad with details because (as far as I understand) they don't really have access to those details, just the image embedding and maybe an OCR'd transcript.


I’m sure Miegakure is going to come out any day now.


Tangent, but now I’m curious about the bot. Is there a write-up anywhere? How did it work? If someone said “hi”, what did the bot respond, and what did the human do? I’m picturing ELIZA-style templates with blanks that a human could fill in with relevant details when necessary.


Basically Levenshtein on previous responses, minus noise words. If the response was 'close enough', the bot would reuse a previously given answer; if it was too distant, the human-in-the-loop would get pinged with the previous 5 interactions as context to provide a new answer.

Because the answers were structured as a tree, every ply would only go down in the tree, which elegantly avoided the bot getting 'stuck in a loop'.

The insight, amazing to me at the time though linguists would have thought it trivial, was how incredibly repetitive human interaction is.
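
If anyone wants a concrete picture, here is a minimal Python sketch of that loop as I read the description above. It assumes a normalized edit-distance threshold; Node, STOP_WORDS, THRESHOLD, and ask_human are illustrative names, not the original bot's code.

    # Minimal sketch, not the original implementation: STOP_WORDS,
    # THRESHOLD, Node, and ask_human are all assumed names.
    from dataclasses import dataclass, field

    STOP_WORDS = {"the", "a", "an", "is", "it", "to", "and", "of"}
    THRESHOLD = 0.3  # assumed: max normalized distance that counts as "close enough"

    def normalize(text):
        # Strip noise words before comparing.
        return " ".join(w for w in text.lower().split() if w not in STOP_WORDS)

    def levenshtein(a, b):
        # Classic dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    @dataclass
    class Node:
        # normalized message -> (stored answer, child node)
        children: dict = field(default_factory=dict)

    def respond(user_msg, node, history, ask_human):
        key = normalize(user_msg)
        best = None
        for seen, (answer, child) in node.children.items():
            d = levenshtein(key, seen) / max(len(key), len(seen), 1)
            if d <= THRESHOLD and (best is None or d < best[0]):
                best = (d, answer, child)
        if best:
            return best[1], best[2]  # reuse the old answer, descend the tree
        # Too distant: ping the human with the last five interactions as context.
        answer = ask_human(user_msg, history[-5:])
        child = Node()
        node.children[key] = (answer, child)
        return answer, child

The tree descent is what prevents loops: a matched reply hands back the child node, so the next comparison only ever sees answers one level deeper.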


The article links to the case of Adam Raine, a depressed teenager who confided in ChatGPT for months and committed suicide. The parents blame ChatGPT. Some of the quotes definitely sound like encouraging suicide to me. It’s tough to evaluate the counterfactual though. Article with more detail: https://www.npr.org/sections/shots-health-news/2025/09/19/nx...


https://www.figure.ai/figure says 5 hour battery life. No mention of charge rate.


It's cool, but it's total uncanny valley for me, and I haven't gotten that from real robots before. Something about the movement in particular is odd.


At that point just have a messaging standard that allows in-line small images.


Yeah, basically. I'm thinking of something like stickers on messaging apps but without the oversized graphics.


It's only a matter of time until Unicode adopts embedded SVG.

