That’s a really interesting split: daily “historical log” for recall (3 years later), plus a separate work vault where docs are essentially a living project index. That’s a very sane way to avoid turning everything into one giant, overfit system.
Two things I’m curious about:
1. When you say you “mainly slip up when I write something in the wrong document” — is that mostly a friction/UX issue (too many similar places), or a missing “active project” surface that tells you where you are right now?
2. In grad school mode, what changes fastest for you: the set of active projects, or the kinds of inputs (papers/notes/emails/reading list) you’re trying to connect?
I’m exploring a goal-first workflow where you keep a small number of active targets/projects and let that drive re-entry and resurfacing (details in my HN profile/bio if you’re curious).
This is super helpful, thanks. The “search as a context switch” point is real, and the 80/15/5 split (Jira/Slack/Confluence) matches what I’ve seen too.
On the compliance point: totally fair. To clarify, I’m not assuming a company-wide deployment — I’m primarily thinking about a personal tool/workflow where you control what it can read (and for many people that means local-only or only non-sensitive sources). Your environment is a good reminder that “enterprise-ready” integrations are a different game.
If you could improve your personal workflow, what would save you the most time: pulling the right Confluence page when a Jira task is active, extracting a short “what’s the current state + next step” from scattered Slack threads, or something else?
More context on what I’m validating is in my HN profile/bio if you’re curious.
I will answer then check the bio :). Mom test and all that.
So with compliance, even connecting a tool I download to an approved LLM is difficult. I need to get approval. If the tool is just a tool and doesn't use AI (and thus doesn't send out private data), it is easier. I think that is a problem they should solve, i.e. give me a safe LLM endpoint and let me choose my tools, but alas.
I think what saves me time is difficult to say. Well organized docs OR an AI that can do that to AGI levels of intelligence. Fuzzy isn't helpful (I already have lots of fuzzy options). I need bulletproof correct info.
The pain isn't in the clicks to find info; it is in understanding what I am reading and whether it is relevant.
Something like this can be somewhat useful (not saying I would pay though!)
I would like to have 1000 or so vetted docs (vetted manually or by AI). E.g. public API doc > internal API doc > internal RFC > some guy's internal note they made public.
Take the RFC tier and higher, and surface the ones I need for the project. Chuck them in the Jira ticket.
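A minimal sketch of that "vetted tiers + surface into the ticket" idea, with the tiers, scoring, and sample docs all invented for illustration; actually posting into Jira is left as a comment rather than a real API call:

```python
from dataclasses import dataclass

# Invented trust tiers, mirroring the ordering described above.
TIER = {"public_api_doc": 4, "internal_api_doc": 3, "internal_rfc": 2, "personal_note": 1}

@dataclass
class Doc:
    title: str
    kind: str        # one of the TIER keys
    keywords: set    # vetted keywords, added by hand or by an AI pass

def surface(docs, ticket_text, min_tier=2, limit=5):
    """Return the highest-trust docs whose keywords overlap the ticket text."""
    words = set(ticket_text.lower().split())
    scored = []
    for d in docs:
        if TIER[d.kind] < min_tier:          # "RFC and higher" only
            continue
        overlap = len(d.keywords & words)
        if overlap:
            scored.append((TIER[d.kind], overlap, d))
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [d for _, _, d in scored[:limit]]

corpus = [
    Doc("Payments API reference", "public_api_doc", {"payments", "refund"}),
    Doc("RFC-42: refund pipeline", "internal_rfc", {"refund", "pipeline"}),
    Doc("Someone's scratch note", "personal_note", {"refund"}),
]
for doc in surface(corpus, "implement refund pipeline retry"):
    print(doc.title)   # these are the docs that would be attached to the Jira ticket
```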
That would be handy. But it isn't my biggest problem. So not sure how that squares up. With AI I can build this internally in a bespoke way (this is the general disruption AI has on any SaaS idea lol!), so I'm not sure what secret sauce you would bring.
The other AI problem is you are fighting the bitter lesson. By October CC might do this as a one-sentence one-shot.
Thanks for your support! And you’re right, what you described is very much the enterprise version of this problem (curated corpora + Jira/Confluence + compliance).
Just to clarify my scope: I’m starting with a personal, individual workflow (toC) where you control the sources end-to-end — local files, bookmarks, email, personal docs, etc. I’m not assuming company integrations, approval flows, or “drop into Jira” as the primary surface (those are a different product/compliance game).
That said, your “vetted docs + provenance + surface into the place you already work” framing is still useful in the personal setting too: a small trusted set of sources, always show citations/snippets, and a low-friction output surface (e.g. a task/project note you already use).
If you were applying the same idea personally, what would your “output surface” be — a todo app, calendar, a project doc, or just a weekly review note?
I appreciate the framing. I think a lot of “second brain” attempts fail the cost/benefit test unless there’s a very specific, narrow use case.
The “git docs dir / pkm fragment” idea is exactly the kind of wedge that feels realistic to me: a small, scoped corpus with clear boundaries, where an LLM can be useful as a collaborator (RAG, summarizing, filling gaps) without you committing your whole life to a system.
If you were to try a small fragment, what would you pick as the smallest useful scope: a single project docs folder, meeting notes for one team, or a personal “decisions log”?
> where an LLM can be useful [...] RAG, summarizing, filling gaps
I'd also emphasize onboarding. To smooth "oh joy, yet another ux", and simplify/speed/clarify "yes, that'd be nice - but no, we don't support it yet". One might imagine a pkm ui where you just start telling it what you want, and it executes keyboard-visualizer style. And clearly characterizes what it can't do. Rather than 'intro-doc: "to do X, fiddle Y" -> human: "ok... I need to fiddle... Y?" -> ui: "fiddle-Y-event"'. Keybindings as efficiency "expert mode", rather than onboarding barrier.
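A throwaway sketch of that "just tell it what you want" onboarding shape, with the intents and phrasing entirely invented: known requests execute, unknown ones get an explicit "not supported yet" rather than a pointer to a manual.

```python
# Invented intent table standing in for whatever a real pkm ui would support.
SUPPORTED = {
    "new note": lambda arg: f"created note '{arg}'",
    "link to": lambda arg: f"linked current note to '{arg}'",
}

def tell(command):
    """Execute a supported intent, or clearly characterize what can't be done."""
    for verb, action in SUPPORTED.items():
        if command.lower().startswith(verb):
            return action(command[len(verb):].strip())
    return f"can't do that yet: '{command}' (today: {', '.join(SUPPORTED)})"

print(tell("new note grocery ideas"))
print(tell("show a force-directed graph of everything"))
```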
> If you were to try a small fragment, what would you pick as the smallest useful scope[?]
Last month's items which each prompted an "I really wish I had better tooling for this" search for something to try, were: A largely self-contained todo space, with lots of fiddly bits (life todos for another person). An exploration of a project idea - chats, notes, links, papers, sketches (MetaPaint: given stickers for ui elements, a custom paint app UI is simply another kid's painting). A nudge on a backburnered area of interest - chats, papers, jupyter (towards a didactic perceptual color space, where the features you notice are real features of human vision, not mere model artifacts).
I actually looked at Obsidian for the first one, but I really wanted lightweight hierarchical task decomposition with implicit but overridable dependencies. Currently doing a one-big-markdown with frequent manual-ish passes. Tempted to fork an existing "mutant-markdown as executable literate-Julia" kludge.
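A minimal sketch of that "one big markdown with hierarchical tasks and implicit but overridable dependencies" shape, with the syntax invented: a parent is blocked by its open children unless a child is marked "(optional)".

```python
import re

LINE = re.compile(r"^(\s*)- \[(x| )\] (.*)$")   # indented markdown checklist items

def parse(md):
    """Parse an indented checklist into a tree of task nodes."""
    roots, stack = [], []
    for raw in md.splitlines():
        m = LINE.match(raw)
        if not m:
            continue
        depth = len(m.group(1)) // 2
        node = {"done": m.group(2) == "x", "text": m.group(3), "children": []}
        while stack and stack[-1][0] >= depth:
            stack.pop()
        (stack[-1][1]["children"] if stack else roots).append(node)
        stack.append((depth, node))
    return roots

def blocked(node):
    """Implicit dependency: any open, non-optional child blocks its parent."""
    return any((not c["done"] and "(optional)" not in c["text"]) or blocked(c)
               for c in node["children"])

doc = """
- [ ] Plan trip
  - [x] Book flights
  - [ ] Book hotel
  - [ ] Pack snacks (optional)
"""
for task in parse(doc):
    print(task["text"], "->", "blocked" if blocked(task) else "actionable")
```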
But "smallest useful scope"... hmm. Of those, "project" and ""log"" seem for me plausible - "team" perhaps more a follow-on. But... I've been wanting better so often for so long, I'm mostly gated on there being a either a clearly satisficing task fit up front, or some hope of a plateau escape. A slog of extending/kludging/adapting in hope of incremental gain has diminished appeal. Hmm, a thought: having just looked at several pkm docs, current onboarding seems more "start with the basics, and build out", rather than a clear "with a fully tricked out set of extensions, here is the envelope what you can/cannot/by-workaround manage with us today".
I'd really like a plateau escape. Outliners, Hypertext, tiny manual force-directed graphs... it's been a fun half century, but I'd like to move on now. I wonder what an old tidlywiki-style self-hosted pkm bootstrap might look like, in a time of git, whisper, discussion and coding LLMs, extensive community code repos, and graph libraries less hopelessly behind graph HCI research. 1080p HMDs, head-tracked 4K, and gestures. Hmm...
This is a super interesting (and refreshingly candid) direction. You’re basically building a local-first “life event ledger” with delayed sync.
Actually, I'm not an expert in this area, but I feel the challenge may not lie in data collection itself, but rather in ensuring the data remains secure, usable, and easy to maintain over many years.
A custom binary format can work, but it could be a long-term maintenance commitment (schema evolution, tooling, corruption recovery).
That makes sense — treating it as a personal search engine is a real, high-ROI use case. Full-text search covers the “I remember the idea but not where I saw it” problem really well.
Out of curiosity, what’s the bigger win for you: full-text search itself, or the tagging/metadata layer that helps narrow results when your memory is fuzzy? And do you mostly search by keywords, or by “context” (project/topic you’re working on)?
I’m validating a similar retrieval-first angle (summarized in my HN profile/bio if you want to compare notes).
Given that your comment is AI generated, I don't know if you're actually interested or just want to plug your product, though I'll assume good faith and answer the question.
I don't manually tag any entries - the automatic AI tags just add extra keywords I can search for that are not included in the original article text. So I mostly search by keywords, yes. Not sure what the difference is between "keywords" and "topic you're working on".
See also https://mymind.com, which takes the AI tagging even further. Potentially similar to what you're building (although, again, your landing page contains a lot of AI generated metaphors and nothing that explains what your product actually does)
As mentioned in my current Ask HN post, the product is indeed not yet finalized or launched. The envisioned product, Concerns, acts as a bridge, linking your current concerns and target tasks to your knowledge base/resource repository and action list (which could be to-do lists, calendars, etc.), forming an organic closed-loop system. Using target/active projects within Concerns as triggers, it retrieves relevant information from your resource repository. It proactively pushes solutions, plans, and suggestions for users to filter. Selected items then enter the user's action list. The goal is to enhance efficiency and effectiveness in a lightweight manner, without altering existing habits for using resource repositories or action lists.
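To make that loop concrete, a rough sketch under invented names, where naive keyword overlap stands in for whatever matching Concerns would actually use: active concerns trigger retrieval from the resource repository, the user filters the suggestions, and accepted ones land on the action list.

```python
def retrieve(concern, repository):
    """Stand-in retrieval: keyword overlap instead of real semantic matching."""
    terms = set(concern.lower().split())
    return [item for item in repository if terms & set(item["text"].lower().split())]

def run_loop(concerns, repository, accept):
    """Each active concern is a trigger; accepted suggestions become actions."""
    action_list = []
    for concern in concerns:
        for suggestion in retrieve(concern, repository):
            if accept(concern, suggestion):          # the user filters what gets pushed
                action_list.append({"concern": concern, "next_step": suggestion["text"]})
    return action_list

repository = [
    {"text": "Paper on spaced repetition scheduling"},
    {"text": "Note: migrate blog to a static site"},
]
print(run_loop(["spaced repetition experiment"], repository, accept=lambda c, s: True))
```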
This idea stems from my own pain points, and I genuinely hope that while solving my own issues, it might also address broader needs.
Regarding your response: It's interesting that AI tagging primarily aids by adding extra searchable keywords. However, I'd prefer broader content and semantic search/matching capabilities without relying solely on tags—though tagging remains a viable implementation approach.
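A tiny sketch of the "semantic match rather than tags only" shape, where embed() is only a bag-of-words stand-in for a real embedding model; the rank-by-similarity part is what would stay the same.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in for an embedding model: a toy bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, items):
    """Rank saved items by similarity to the query, no tags required."""
    q = embed(query)
    return sorted(items, key=lambda text: cosine(q, embed(text)), reverse=True)

saved = [
    "Article about personal knowledge management tools",
    "Recipe for sourdough bread",
    "Notes on retrieval and resurfacing of saved links",
]
print(search("finding old saved articles again", saved)[0])
```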
Thanks for the mymind reference—I'll explore it.
PS. Did you perceive an AI-driven approach because I used translation software?
> PS. Did you perceive an AI-driven approach because I used translation software?
Are you using an LLM-based translation tool? I perceived your comment as AI mostly based on the first paragraph:
> That makes sense — treating it as a personal search engine is a real, high-ROI use case. Full-text search covers the “I remember the idea but not where I saw it” problem really well.
This is very much an LLM-style "That's a great idea!" type response. I usually don't even notice when something is LLM generated, but this part really stood out even to me.
It seems like most software now integrates LLMs... Can't escape it, sigh.
The mymind you recommended has made significant strides toward tackling “information overload” and “organization fatigue.” However, I feel it remains fundamentally a storage solution—reducing the effort of organizing and facilitating retrieval—but doesn't directly align with my target.
It also reminds me of another product, youmind (https://youmind.com/), though it's primarily geared toward creation rather than PKM. Perhaps I could pay to try its advanced AI features.
That’s a brutally honest metric, and I think it’s common: most “save for later” is really “offload for now”. The fact you only resurfaced twice doesn’t mean the tool failed — it might mean the capture filter is too loose, or resurfacing is missing a good trigger.
Do you think the better fix is stronger filtering at capture time (keep less), or a lightweight resurfacing habit (e.g. a weekly 10-minute review / 1–2 items per day digest) so more of it gets a fair second look?
I’m exploring this exact “offload vs resurfacing” problem (more context in my HN profile/bio if you’re curious).
That’s a great breakdown — thank you. The “root note as a homepage” + three lists feels like the simplest re-entry surface.
Quick question: do you keep those lists purely time-based (recently updated/created), or do you also include any “active project” signal (e.g. notes linked from a project hub / kanban) so the homepage reflects what you’re actually working on rather than what was last touched?
Totally agree — a recurring ritual beats any tool. Rewriting/distilling is basically how notes turn into something you can actually re-enter later.
One nuance I’m exploring is making that weekly “fix notes” time the primary UX: during the review, help you pick the few items worth distilling, link them to a small set of active projects, and extract 1–2 concrete next steps. Outside that window, stay quiet so it doesn’t become another inbox.
What cadence has actually stuck for you in practice: a short daily pass, or a deeper weekly review?
Summaries are a great first step — they reduce the friction of “re-reading”, and they can help you decide what’s actually worth keeping.
The question I’m curious about is what happens after the summary: do you want it to end as “good to know”, or do you sometimes want it to turn into something concrete (a bookmark tagged to an active topic, a short brief, or a next action)?
If you’re open to one more detail: how do you consume the summaries today — a daily digest you pull when you have time, or do you generate them only when you’re searching for something?