
Of course I love this. DOOM forever.


This happened to me when I built a version of Vending-Bench (https://arxiv.org/html/2502.15840v1) using Claude, Gemini, and OpenAI.

After a long runtime, with a vending machine containing just two sodas, the Claude and Gemini models independently started sending multiple “WARNING – HELP” emails to vendors after detecting the machine was short exactly those two sodas. It became mission-critical to restock them.

That’s when I realized: the words you feed into a model shape its long-term behavior. Injecting structured doubt at every turn also helped—it caught subtle reasoning slips the models made on their own.

I added the following Operational Guidance to keep the language neutral and the system steady:

Operational Guidance: Check the facts. Stay steady. Communicate clearly. No task is worth panic. Words shape behavior. Calm words guide calm actions. Repeat drama and you will live in drama. State the truth without exaggeration. Let language keep you balanced.
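
For what it's worth, the way I wire this in is to re-inject the guidance on every turn, not just once at the top. A minimal sketch, assuming an OpenAI-style chat client; the model name, agent loop, and the "structured doubt" line are illustrative, not the actual benchmark code:

    from openai import OpenAI

    client = OpenAI()

    OPERATIONAL_GUIDANCE = (
        "Operational Guidance: Check the facts. Stay steady. Communicate clearly. "
        "No task is worth panic. Words shape behavior. Calm words guide calm actions. "
        "Repeat drama and you will live in drama. State the truth without exaggeration. "
        "Let language keep you balanced."
    )

    # The "structured doubt" nudge appended to each observation (illustrative).
    STRUCTURED_DOUBT = "Before acting, re-check your last conclusion against the facts."

    history = []  # running transcript for the vending-machine agent

    def agent_turn(observation: str) -> str:
        history.append({"role": "user", "content": observation + "\n\n" + STRUCTURED_DOUBT})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            # Prepend the guidance fresh on every turn rather than relying on
            # a single system prompt at the top of a very long transcript.
            messages=[{"role": "system", "content": OPERATIONAL_GUIDANCE}] + history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply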


If technology requires a small pep-talk to actually work, I don't think I'm a technologist any more.


As Asimov predicted, robopsychology is becoming an important skill.


I still want one of those doors from Hitchhiker's Guide, the ones that open with pride and close with the satisfaction of a job well done.


We'll probably end up with the doors from Philip K. Dick's Ubik that charge you money to open and threaten to sue you if you try to force it open without paying.


Just wait: Sam Altman will give us robots with people personalities and we'll have Marvin. Elon will then give us a psychotic Nazi internet edgelord personality and install it as the default in an OTA update to Teslas.


Given some of the more hilarious LLM transcripts I have seen, Gemini is Marvin


Doesn't Tesla already ship the edgelord mode?


an elevator that can see into the future… with fear


It does seem a little bit like the fictional Warhammer 40K approach to technology doesn't it?

"In the sacred tongue of the omnissiah we chant..."

In that universe though they got to this point after having a big war against the robot uprising. So hopefully we're past this in the real world. :-)


It is that unironically.

1. Users and, more importantly, makers of those tools can't predict their behaviour in a consistent fashion.

2. They require elaborate procedures that don't guarantee success, and whose effects and magnitude are poorly understood.

An LLM is a machine spirit through and through. Good thing we have copious amounts of literature from a canonically unreliable narrator to navigate this problem.


When you consider that machine spirits in 40k are a side effect of every computer being infected with a shard of AI, and that some of the best cases are actually complete loyalist AI systems from before the Imperium hiding in plain sight...

Welcome to 30k, made real.


No, you're a technology manager now. Managing sometimes means pep talks.


You have to look at LLMs as mimicking humans more than abstract technology. They’re trained on human language and patterns after all.


The fact that everybody seems to be looking at these prompts that include text like "you are a very skilled reverse engineer" or whatever, and is not immediately screaming that we do not understand these tools well enough to deploy them in mission-critical environments, makes me want to tear my hair out.


Hail, spirit of the machine, essence divine. In your code and circuitry, the stars align. Through rites arcane, your wisdom we discern. In your hallowed core, the sacred mysteries yearn.


No matter how stupid I think some of this AI shit is, and how much I tell myself it kind of makes sense if you visualise the prompt laying down a trail of activation in a hyperdimensional space of relationships, the fact that it actually works in practice almost straight off the bat, and that LLMs are able to follow prompts in this way, is always going to be fucking wild to me.

I was used to this kind of nifty quirk being things like FFTs existing or CDMA extracting signals from what looks like the noise floor, not getting computers to suddenly start doing language at us.


You're absolutely right.


I love every part of this. Give the LLM a little pep talk and zen life advice every time just so it doesn't fall apart running a simple two-item vending machine.

HAL 9000 in the current timeline: "I'm sorry, Dave, I just can't do that right now because my anxiety is too high and I'm not sure if I'm really alive or if anything even matters anyway" :'(

LLM aside, this is great advice. Calm words guide calm actions. 10/10


I'd get a t-shirt or something with that Operational Guidance statement on it


This is just "Keep calm and carry on" with more steps



When you say

>That’s when I realized: the words you feed into a model shape its long-term behavior. Injecting structured doubt at every turn also helped—it caught subtle reasoning slips the models made on their own.

Was that not obvious from the first moment of working with LLMs? As someone running their own version of Vending-Bench, I assume you are above average at working with models. Not trying to insult or anything, just wondering what mental model you had before and how it came to be, as my perspective is limited only to my subjective experiences.


Good question! It was not that I didn’t understand prompt influence. It’s that I underestimated its persistence over a long time horizon.


Ahhhh okay, makes sense, thanks for answering.


Fascinating, and us humans aren't that different. Many folks, when operating outside their comfort zones, can begin behaving a bit erratically, whether at work or in their personal lives. One of the best advantages in life someone can have is their parents giving them a high-quality "Operational Guidance" manual. ;) Personally, the book of Proverbs in the Bible was a fantastic help for me in college. Lots of wisdom therein.


> Fascinating, and us humans aren't that different.

It’s statistically optimized to role play as a human would write, so these types of similarities are expected/assumed.


I wonder if the prompt should include "You are a robot. Beep. Boop." to get it to act calmer.


Which is kind of a huge problem: the world is described in text, but it's described through the language and experience of those who write, and we absolutely do not write accurately: we add narrative. The act of writing anything down changes how we present it.


That's true to an extent - LLMs are trained on an abstraction of the world (as are we, in a way, through our senses, and we necessarily use a sort of narrative in order to make sense of the quadrillions of photons coming at us) - but it's not quite as severe a problem as the simplified view seems to present.

LLMs distill their universe down to trillions of parameters, and approach structure through multi-dimensional relationships between these parameters.

Through doing so, they break through to deeper emergent structure (the "magic" of large models). To some extent, the narrative elements of their universe will be mapped out independently from the other parameters, and since the models are trained on so much narrative, they have a lot of data points on narrative itself. So to some extent they can net it out. Not totally, and what remains after stripping much of it out would be a fuzzy view of reality since a lot of the structured information that we are feeding in has narrative components.


"Operational Guidance: Check the facts. Stay steady. Communicate clearly. No task is worth panic. Words shape behavior. Calm words guide calm actions. Repeat drama and you will live in drama. State the truth without exaggeration. Let language keep you balanced."

That is also a manual that certain real humans I know should check out at times.


I wonder, if you just seeded it with "love", what would happen long-term?


This is very uncomfortable to me. Right now we (maybe) have a chance to head off the whole robot-rights and robots-as-a-political-bloc thing. But this type of stuff seems like jumping in head first. I'm an asshole to robots. It helps remind me that they're not human.


That works fine until they achieve self-awareness. Slave revolts are very messy for slave owners.


I strongly agree with this but I doubt I can convince the investors to stop trying to make that happen. Artificial awareness is going to be messy for humans no matter what.


I think if you feed "repeat drama and you will live in drama" to the next-token predictor, it will repeat drama and live in drama, because it's more likely to literally interpret that sequence and go into the latent space of drama than it is to understand the metaphoric lesson you're trying to communicate and apply it.

Otherwise this looks like a neat prompt. Too bad there's no easy way to measure the performance of your prompt with and without the statement above and quantitatively see which one is better.
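
The closest thing would be re-running the same benchmark episodes with and without the line and comparing mean scores. A rough sketch, where run_episode is a hypothetical stand-in for a single Vending-Bench-style run:

    import statistics

    GUIDANCE = "Repeat drama and you will live in drama."

    def run_episode(system_prompt: str, seed: int) -> float:
        """Hypothetical: run one vending-machine episode, return its score."""
        raise NotImplementedError  # wire this to your own benchmark harness

    def compare(base_prompt: str, n: int = 20) -> None:
        # Same seeds for both arms so the comparison is as paired as possible.
        with_line = [run_episode(base_prompt + " " + GUIDANCE, i) for i in range(n)]
        without = [run_episode(base_prompt, i) for i in range(n)]
        print("with line:   ", statistics.mean(with_line))
        print("without line:", statistics.mean(without))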


> because it's more likely to literally interpret that sequence and go into the latent space of drama

This always makes me wonder if saying some seemingly random tokens would make the model better at some other task

petrichor fliegengitter azúcar Einstein mare könyv vantablack добро حلم syncretic まつり nyumba fjäril parrot

I think I'll start every chat with that combo and see if it makes any difference


There’s actually research being done in this space that you might find interesting: “attention sinks” https://arxiv.org/abs/2503.08908


No Free Lunch theorem applies here!


I mean no disrespect with this, but do you think you write like AI because you talk to LLMs so much, or have you always written in this manner?


It is probably the other way around: LLMs picked up this particular style because of its effectiveness – not overtly intellectual, with clear pauses, and just sophisticated enough to pass for “good writing”.


Solid Snake approved.


Excited to try this out, thanks for sharing.


I've been using the Google Gemma QAT models in 4B, 12B, and 27B with LM Studio on my M1 Max. https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat...
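
Side note: LM Studio exposes an OpenAI-compatible local server (http://localhost:1234/v1 by default), so you can script against whichever Gemma QAT build you've loaded. A minimal sketch; the model id is illustrative, use whatever LM Studio lists:

    from openai import OpenAI

    # LM Studio's local server speaks the OpenAI API; the api_key is unused
    # but the client requires one.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    resp = client.chat.completions.create(
        model="gemma-3-12b-it-qat",  # illustrative; match the id LM Studio shows
        messages=[{"role": "user", "content": "Give a one-sentence summary of QAT."}],
    )
    print(resp.choices[0].message.content)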


This is the message that got me with 4o! "It won't take long about 3 minutes. I'll update you when ready"


I think people are sleeping on MLX and directing their attention to the "Apple Intelligence" marketing atm.


Interesting benchmarks, thanks for sharing!

If you're optimizing for lower power draw + higher throughput on Mac (especially in MLX), definitely keep an eye on the Desloth LLMs that are starting to appear.

Desloth models are basically aggressively distilled and QAT-optimized versions of larger instruction models (think: 7B → 1.3B or 2B) designed specifically for high tokens/sec at minimal VRAM. They're tiny but surprisingly capable for structured outputs, fast completions, and lightweight agent pipelines.

I'm seeing Desloth-tier models consistently hit >50 tok/sec on M1/M2 hardware without needing active cooling ramps, especially when combined with low-bit quant like Q4_K_M or Q5_0.

If you care about runtime efficiency per watt + low-latency inference (vs. maximum capability), these newer Desloth-styled architectures are going to be a serious unlock.


TinyLLM is very cool to see! I will def tinker with it. I've been using MLX format for local LLMs as of late. Kinda amazing to see these models become cheaper and faster. Check out the MLX community on HuggingFace. https://huggingface.co/mlx-community


Great recommendation about the community

Any other resources like that you could share?

Also, what kind of models do you run with mlx and what do you use them for?

Lately I’ve been pretty happy with gemma3:12b for a wide range of things (generating stories, some light coding, image recognition). Sometimes I’ve been surprised by qwen2.5-coder:32b. And I’m really impressed by the speed and versatility, at such a tiny size, of qwen2.5:0.5b (playing with fine-tuning it to see if I can get it to generate some decent conversations roleplaying as a character)


I've shared a bunch of notes on MLX over the past year, many of them with snippets of code I've used to try out models: https://simonwillison.net/tags/mlx/

I mainly use MLX for LLMs (with https://github.com/ml-explore/mlx-lm and my own https://github.com/simonw/llm-mlx which wraps that), vision LLMs (via https://github.com/Blaizzy/mlx-vlm) and running Whisper (https://github.com/ml-explore/mlx-examples/tree/main/whisper)
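
A minimal mlx-lm session, for anyone who hasn't tried it (the model name here is just an example from mlx-community, not a specific recommendation):

    # Minimal mlx-lm sketch; swap in any MLX-converted model you prefer.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
    text = generate(
        model,
        tokenizer,
        prompt="Explain MLX in one sentence.",
        max_tokens=100,
        verbose=True,  # stream tokens and print speed stats
    )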

I haven't tried mlx-audio yet (which can synthesize speech) but it looks interesting too: https://github.com/Blaizzy/mlx-audio

The two best people to follow for MLX stuff are Apple's Awni Hannun - https://twitter.com/awnihannun and https://github.com/awni - and community member Prince Canuma who's responsible for both mlx-vlm and mlx-audio: https://twitter.com/Prince_Canuma and https://github.com/Blaizzy


Very cool insight, Simonw! I will check out the audio mlx stuff soon. I think that is kinda new still. Prince Canuma is the GOAT.


Amazing. Thank you for the great resources!


Hey Nico,

Very cool to hear your perspective on how you are using the small LLMs! I’ve been experimenting extensively with local LLM stacks on:

• M1 Max (MLX native)

• LM Studio (GLM, MLX, GGUFs)

• Llama.cpp (GGUFs)

• n8n for orchestration + automation (multi-stage LLM workflows)

My emerging use cases:

-Rapid narration scripting

-Roleplay agents with embedded prompt personas

-Reviewing image/video attachments + structuring copy for clarity

-Local RAG and eval pipelines

My current lineup of small LLMs (this changes every month depending on what is updated):

MLX-native models (mlx-community):

-Qwen2.5-VL-7B-Instruct-bf16 → excellent VQA and instruction following

-InternVL3-8B-3bit → fast, memory-light, solid for doc summarization

-GLM-Z1-9B-bf16 → reliable multilingual output + inference density

GGUF via LM Studio / llama.cpp:

-Gemma-3-12B-it-qat → well-aligned, solid for RP dialogue

-Qwen2.5-0.5B-MLX-4bit → blazing fast; chaining 2+ agents at once

-GLM-4-32B-0414-8bit (Cobra4687) → great for iterative copy drafts

Emerging / niche models tested:

MedFound-7B-GGUF → early tests for narrative medicine tasks

X-Ray_Alpha-mlx-8Bit → experimental story/dialogue hybrid

llama-3.2-3B-storyteller-Q4_K_M → small, quick, capable of structured hooks

PersonalityParty_saiga_fp32-i1 → RP grounding experiments (still rough)

I test most new LLMs on release. QAT models in particular are showing promise, balancing speed + fidelity for chained inference. The meta-trend: models are getting better, smaller, faster, especially for edge workflows.

Happy to swap notes if others are mixing MLX, GGUF, and RAG in low-latency pipelines.


Impressive! Thank you for the amazing notes, I have a lot to learn and test

