That’s a great point — collaboration creates a natural “audience filter”, which reduces hoarding because you’re writing for someone, not just storing for yourself.
Kanban as a shared representation of “active work” also feels like the cleanest project-context signal: it’s explicit, lightweight, and already part of how the team coordinates.
Curious: in your experience with relay.md, what actually changes behavior the most?
1. social accountability (others will see messy notes)
2. having a shared kanban/project board
3. conventions/templates for how notes get promoted from “rough” to “reference”
Details in my HN profile/bio if you want more context on the “active projects as constraints” angle I’m exploring.
My cofounder actually has a bunch of skills with claude code that surface context into our daily notes (from our meeting notes, transcripts, crm, gmail, etc.), but it's sort of on him to show that it is useful... so while he is still "hoarding" outside of the shared context, it is with an eye toward delivering actual value inside of it.
Feels pretty different from the fauxductivity traps of solo second brain stuff.
That makes a lot of sense. Social accountability is a surprisingly powerful “noise filter” — once other people will see the mess, you naturally promote only what’s legible and useful.
And your cofounder’s setup is interesting because it’s not “PKM for PKM’s sake”, it’s context injection tied to an actual delivery surface (daily notes). That feels like the right wedge: the system earns its keep only if it helps someone ship something this week, not just accumulate.
Curious: what’s the single best signal that his context surfacing is “working”? Fewer missed follow-ups, faster re-entry into threads, or just less time spent searching across Gmail/CRM/transcripts?
We just have all meeting transcripts go to Obsidian (and get processed/mined) as well as our collaborative notes (our startup makes Obsidian collaborative) for our standups and then use claude code to summarize each day into the next.
We avoid the browser agents entirely, and thus avoid the scattered context. Claude code + markdown files in our vault.
It works remarkably well. I am bullish on unix tools and text files - DOM parsing, RAG, etc. feel like solutions to unnecessary problems.
What do you mean by snapshots? There's a "zmx history" command which will print whatever is stored in libghostty as plain text, with ANSI escape codes, or even as HTML.
I'm rendering a few dozen terminals in a website, and for all of the inactive ones I render and serve a JPG of the "current screen" from the ANSI escape codes kitty produces.
I've found this to be a difficult thing to get. abduco doesn't keep the current screen state, and I don't want all of the complexity of tmux. I also don't want the entire scrollback history (until I click into a given terminal and connect with xterm).
IMHO there are a couple of axes that are interesting in this space.
1. What do the tokens you're storing in the client look like? This could just be the secret (but encrypted), or you could design a whole granular authz system. It seems like Tokenizer is the former and Formal is the latter. I think macaroons are an interesting choice here.
2. Is the MITM proxy transparent? Node, curl, etc. let you specify a proxy as an environment variable (see the sketch below), but if you're willing to mess with the certificate store then you can run arbitrary unmodified code. It seems like both Tokenizer and Formal are explicit proxies.
3. What proxy are you using, and where does it run? Depending on the authz scheme/token format you could run the proxy centrally, or locally as a "sidecar" for your dev container/sandbox.
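To make axis 2 concrete, here is a minimal sketch of the explicit-proxy flavor in Python: the client is simply pointed at the proxy via environment variables and told to trust the proxy's CA so TLS can be re-terminated. The proxy address, CA path, API host, and placeholder token are all made up for illustration.

```python
# Minimal sketch of the explicit-proxy variant from axis 2, assuming a local
# MITM proxy at localhost:8080 and a CA file it presents. All hostnames, paths,
# and the placeholder token are hypothetical.
import os
import requests

# Any proxy-aware client (curl, Node, Python requests, ...) can be pointed at
# the proxy via environment variables; no code changes in the tool itself.
os.environ["HTTPS_PROXY"] = "http://localhost:8080"          # hypothetical proxy address
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/proxy/mitm-ca.pem"  # trust the proxy's CA

# The request flows through the proxy, which can swap this placeholder
# credential for the real upstream secret before forwarding.
resp = requests.get(
    "https://api.example.com/api/products",
    headers={"Authorization": "Bearer placeholder-token"},
)
print(resp.status_code)
```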
I'm working on something similar called agent-creds [0]. I'm using Envoy as the transparent (MITM) proxy and macaroons for credentials.
The idea is that you can arbitrarily scope down credentials with macaroons, both in terms of scope (only certain endpoints) and time. This really limits the damage that an agent can do, but also means that if your credentials are leaked they are already expired within a few minutes. With macaroons you can design the authz scheme that *you* want for any arbitrary API.
I'm also working on a fuse filesystem to mount inside of the container that mints the tokens client-side with short expiry times.
1. You can issue your own tokens which means you can design your own authz in front of the upstream API token.
2. Macaroons can be attenuated locally.
So at the time that you decide you want to proxy an upstream API, you can add restrictions like endpoint path to your scheme.
Then, once you have that authz scheme in place, the developer (or agent) can attenuate permissions within that authz scheme for a particular issued macaroon.
I could grant my dev machine the ability to access e.g. /api/customers and /api/products. If I want to have Claude write a script to add some metadata to my products, I might attenuate my token to /api/products only and put that in the env file for the script.
Now Claude can do development on the endpoint, the token is useless if leaked, and Claude can't read my customer info.
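Not the actual agent-creds code, just a rough sketch of that mint-then-attenuate flow using the pymacaroons library; the caveat syntax ("path = ...", "expires < ...") and the signing key are invented for illustration.

```python
# Sketch only: mint a broad macaroon, then attenuate it locally before handing
# it to an agent. The caveat strings are a made-up convention, not a standard.
from datetime import datetime, timedelta, timezone
from pymacaroons import Macaroon

SECRET = "root-key-held-by-the-proxy"  # hypothetical signing key

# The proxy (or you) mints a broad token for the dev machine.
dev = Macaroon(location="api.example.com", identifier="dev-machine", key=SECRET)
dev.add_first_party_caveat("path in /api/customers,/api/products")

# Before giving a token to Claude, narrow it: products only, five-minute expiry.
# Attenuation is purely local; no round trip to the issuer is needed.
scoped = Macaroon.deserialize(dev.serialize())
scoped.add_first_party_caveat("path = /api/products")
expiry = datetime.now(timezone.utc) + timedelta(minutes=5)
scoped.add_first_party_caveat(f"expires < {expiry.isoformat()}")

print(scoped.serialize())  # this is what goes into the script's env file
```

The useful property is that whoever holds a macaroon can only narrow it further, never widen it, so handing the agent an attenuated copy stays safe even if that copy leaks.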
Stripe actually does offer granular authz and short lived tokens, but the friction of minting them means that people don't scope tokens down as much.
I'm not sure I'm fully understanding you, but in my experience I have a few upstream APIs I want to use for internal tools (stripe, gmail, google cloud, anthropic, discord, my own pocketbase instance, redis) but there are a lot of different scripts/skills that need differing levels of credentials.
For example, if I want to write a skill that can pull subscription cancellations from today, research the cancellation reason, and then push a draft email to gmail, then ideally I'd have...
- a 5 minute read-only token for /subscriptions and /customers for stripe
- a 5 minute read-write token to push to gmail drafts
- a 5 minute read-only token to customer events in the last 24h
Claude understands these APIs well (or can research the docs) so it isn't a big lift to rebuild authz, and worst case you can do it by path prefix and method (GET, POST, etc) which works well for a lot of public APIs.
I feel like exposing the API capability is the easy part, and being able to get tight-fitting principle-of-least-privilege tokens is the hard part.
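To make the path-prefix-and-method idea concrete, here's a hedged sketch of what the proxy-side check could look like with pymacaroons; the caveat grammar and the authorize() helper are invented for illustration, not part of any existing tool.

```python
# Sketch of verifying method, path-prefix, and expiry caveats against a request.
from datetime import datetime, timezone
from pymacaroons import Macaroon, Verifier

SECRET = "root-key-held-by-the-proxy"  # hypothetical signing key


def authorize(serialized_token: str, method: str, path: str) -> bool:
    token = Macaroon.deserialize(serialized_token)
    verifier = Verifier()

    def check(predicate: str) -> bool:
        # e.g. "method = GET", "path ~ /subscriptions", "expires < 2024-01-01T00:05:00+00:00"
        if predicate.startswith("method = "):
            return method == predicate.removeprefix("method = ")
        if predicate.startswith("path ~ "):
            return path.startswith(predicate.removeprefix("path ~ "))
        if predicate.startswith("expires < "):
            deadline = datetime.fromisoformat(predicate.removeprefix("expires < "))
            return datetime.now(timezone.utc) < deadline
        return False  # unknown caveats fail closed

    verifier.satisfy_general(check)
    return verifier.verify(token, SECRET)  # True on success, raises if a caveat is unmet
```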
Yeah, it says so at the top of the README (though I suppose I could have put that in the comment too). I'm not building a product, just sharing a pattern for internal tooling.
Someone on another thread asked me to share it so I had claude rework it to use docker-compose and remove the references to how I run it in my internal network.
Writing your own skill is actually a lot better for context efficiency.
Your skill will be tuned to your use case over time, so if there's something that you do a lot you can hide most of the back-and-forth behind the python script / cli tool.
You can even improve the skill by saying "I want to be more token efficient, please review our chat logs for usage of our skill and factor out common operations into new functions".
If anything, context waste/rot comes from documentation of features that other people need but you don't. The skill should be a sharp knife, not a multi-tool.
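As a toy illustration of hiding the back-and-forth behind the script: a frequent multi-step operation becomes a single subcommand of the skill's CLI. Every name below is hypothetical.

```python
# Hypothetical skill CLI: one subcommand wraps what used to be several manual API calls.
import argparse
import json


def cancellations_today(args):
    # Imagine this wraps the two or three API calls the agent used to make by hand.
    print(json.dumps({"since": args.since, "results": []}, indent=2))


def main():
    parser = argparse.ArgumentParser(prog="myskill")
    sub = parser.add_subparsers(dest="command", required=True)

    p = sub.add_parser("cancellations-today", help="list today's cancellations with reasons")
    p.add_argument("--since", default="24h")
    p.set_defaults(func=cancellations_today)

    args = parser.parse_args()
    args.func(args)


if __name__ == "__main__":
    main()
```

The skill's documentation then only needs one line per subcommand, which costs far fewer tokens than walking the model through the raw API calls each time.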
The main things I think are missing are (1) how much am I spending, (2) why isn't my sprite paused, and (3) how can I get my stuff out (it would be nice to be able to mount in either direction, or else integrate with git/git worktrees).
I ended up using it (and enjoying yolo mode!) but then my sprites weren't pausing and I was worried about spending too much, so I deleted them.
Working with other people gives you good habits against hoarding because you have a sense of the audience and what might be useful to them.
We also support the kanban plugin, which works well for tracking and sharing what we're working on.