jackzhuo's comments | Hacker News

I think this works well if you’re already a practitioner in a domain and have repeated exposure to its problems.

For many indie developers, though, that assumption doesn’t always hold. We’re often not domain experts, and starting purely from personal pain can be misleading if the pain isn’t shared or frequent enough.

In my experience, the challenge isn’t “starting from yourself” vs “starting from keywords”, but figuring out how to get close enough to a real problem space to develop that kind of insight in the first place.


This describes my situation almost exactly, and it’s what triggered my reflection.

I built a “product” starting from keywords, but then realized I didn’t actually know where the users were, or how to talk to them. There was no obvious place for real feedback.

Starting from keywords let me ship something, but it also meant I was missing the professional context around the problem — the deeper understanding you only get by being inside the system where the frustration exists.

In hindsight, I think I optimized for building something, not for being close to the problem itself.


That resonates a lot with my experience.

I’ve found that discovering keywords is rarely the beginning of understanding a need. Knowing where people with that need actually are — and how they talk about the problem — seems much more important.

Even if you do find a viable keyword, SEO alone usually isn’t enough. You still have to talk to users, watch how they use (or don’t use) the solution, and iterate based on real feedback. Keywords feel more like a downstream artifact once you’re already deep in the context.


Funny you mention that—visual automation was actually in my original roadmap!

I held off on building it because I personally really cherish the manual process. The physical effort (even the clicking) helps me settle into the reading.

But I hear you. 100 clicks is a heavy lift for every session. I will likely add that 'visual auto-play' feature in a future update as a middle ground. Thanks for the feedback!


That 'underwhelmed' feeling is exactly what I was trying to avoid!

It makes sense that the character matching worked well: AI is incredible at pattern-matching your past data and context. But divination (Tarot, I Ching) requires synchronicity and a sense of 'randomness' that feels earned.

When an LLM generates a reading, it's just predicting the next likely token, which flattens the magic. The manual ritual (shuffling cards or clicking stalks) restores that 'weight' that AI removes.


I like your take on this. It makes sense.

OP here.

I built this project because I am fascinated by the Yarrow Stalk method—a process that is complex and full of ritual.

While most apps are just "click and get answer," my implementation requires the user to click over 100 times to complete the ritual. I originally hesitated to add AI because AI demands speed, whereas this ritual represents slowness.

After a month of observation, the results have been surprising:

1. I see users completing these 100+ clicks every day, far exceeding my expectations.

2. Users have left comments on the site specifically asking me NOT to add AI.

3. I also received strong validation for this "sacred friction" strategy from the Indie Hackers community.

This has been really encouraging. It seems people are craving the "process" more than just the result.

I'd love to hear your thoughts on this.


Personally I am attracted to the simplicity of the I Ching in that I can do a reading for someone very quickly in the field without a lot of explaining (off brand as-a-fox), so I pack an assortment of coins in my tail (ahem… backpack), though I am still looking for a second pocket I Ching so I have something other than Wilhelm.

Love the coin approach for field readings!

Regarding a 'second pocket' version: you absolutely have to check out Bradford Hatcher (https://hermetica.info/).

He is actually the biggest inspiration behind my development process. His work is incredibly deep, pragmatic, and distinct from the 'Christianized' tone you sometimes get with Wilhelm. He offers his massive 2-volume translation for free on his site.

I am actually considering digitizing his text as an alternative option on my site because his word-by-word matrix is just mind-blowing. Let me know if you find his style fits your 'field reading' vibe.


That PDF is amazing, but I would need some way to cook it down to get interpretations quickly. I'd like to be able to function electronics-free, but I do pack a phone and tablet most of the time, and the tablet will probably be part of another kind of field demonstration I do.

I find it interesting the directions that popular works and I Ching scholarship are going: on one hand there is market demand for a system of fortunetelling that is more positive, and on the other hand there is the desire to unearth the real text behind that interpretation -- and notably the Chinese language has changed so much since then that a literate Chinese reader is going to struggle with it.

The important thing for me is something that is easy to run but still has a good symbolic and mythic quality to it. I don't really know how to run tarot for instance and don't really want to learn.


(Most literate readers will have access to a plethora of modern annotations)

http://www.issplc.com/upload/pdf/2025/06/13A%20Textual%20Stu...

>颐六三 (Yi, hexagram 27, six in the third place). This line means "Opposing the principles of nourishment; steadfastness brings misfortune."

As implied by

https://news.ycombinator.com/item?id=46588660

So I Ching works well even for playing against the universe??

(When divination succeeds, credit the universe; when it fails, forgive the interpreter)

(Found that pdf while cursorily looking for a link between Tarocchi & Mongolian sheep bone divination, via either Mamluk or Marco Polo XD https://en.wikipedia.org/wiki/Shagai#:~:text=is%20as%20part%...

Italian Romanis had no need for the 3rd use ;)

https://en.wikipedia.org/wiki/Vowel_harmony

https://old.reddit.com/r/linguistics/comments/nrg00d/vowel_h... )


That quote—'When divination succeeds, credit the universe; when it fails, forgive the interpreter'—is absolute gold. I might have to put that on the website footer!

That 'hybrid' workflow (Physical Coins + Digital Lookup) is exactly what I had in mind for my 'Direct Interpret' mode.

I actually already have this live at: https://castiching.com/interpret

You can skip the digital casting entirely. Just throw your coins in the field, note the numbers, and plug them in there. It instantly pulls up the interpretation without the clicking.
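
In case it helps anyone doing field readings, the mapping from coins to the numbers you enter is the standard three-coin convention: heads count as 3, tails as 2, so each toss of three coins sums to a line value from 6 to 9. Here's a minimal TypeScript sketch of that mapping (illustrative only, not the actual site code):

    // Standard three-coin method: heads = 3, tails = 2.
    // Each toss of three coins sums to 6, 7, 8, or 9:
    // 6 = old yin (changing), 7 = young yang, 8 = young yin, 9 = old yang.
    type Coin = 2 | 3;

    function lineValue(toss: [Coin, Coin, Coin]): number {
      return toss[0] + toss[1] + toss[2];
    }

    // A hexagram is six lines, recorded bottom to top.
    // Odd values (7, 9) are solid yang lines; even values (6, 8) are broken yin.
    const lines = [
      lineValue([3, 3, 3]), // 9: old yang
      lineValue([2, 2, 3]), // 7: young yang
      lineValue([2, 2, 2]), // 6: old yin
      lineValue([3, 3, 2]), // 8: young yin
      lineValue([2, 3, 3]), // 8
      lineValue([3, 2, 2]), // 7
    ];
    console.log(lines.join(" ")); // These six numbers are what you'd plug in.

The changing lines (6 and 9) are what give you the second hexagram.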


Hi HN,

I am an indie dev and I built this tool to scratch my own itch.

I spent a lot of time trying to replicate styles I saw on X (Twitter), but I realized that simply copying prompts doesn't work anymore. The problem is that every AI model now speaks a different "language." Midjourney relies heavily on parameters like --sref and --stylize, while Flux prefers structured data, and DALL-E just wants simple natural English.

Existing image-to-text tools usually just describe what is in the image. They don't tell the model how to generate it.

So I built Prompt Lab to focus on model-specific tuning. When you upload an image, my tool analyzes the visual style and composition, then translates that data into the specific syntax for your target model.
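
To make that concrete, here's a rough TypeScript sketch of the idea (the type and function names are hypothetical, not the actual Prompt Lab internals): the same extracted style profile gets rendered into each model's preferred format.

    // Hypothetical style profile extracted from an uploaded image.
    interface StyleProfile {
      subject: string;   // e.g. "a lighthouse at dusk"
      medium: string;    // e.g. "watercolor"
      palette: string;   // e.g. "muted earth tones"
      stylize?: number;  // Midjourney-style intensity knob
    }

    type TargetModel = "midjourney" | "flux" | "dalle";

    // One analysis, three output "languages".
    function renderPrompt(p: StyleProfile, model: TargetModel): string {
      switch (model) {
        case "midjourney":
          // Terse tags plus trailing parameters.
          return `${p.subject}, ${p.medium}, ${p.palette} --stylize ${p.stylize ?? 250}`;
        case "flux":
          // Structured, labeled description.
          return JSON.stringify({ subject: p.subject, medium: p.medium, palette: p.palette });
        case "dalle":
          // Plain natural language.
          return `A ${p.medium} of ${p.subject} in ${p.palette}.`;
      }
    }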

It is definitely an MVP, so things might break. Please give it a spin and let me know what you think in the comments. I am actively working on it today, so if you have ideas on how to improve the prompt syntax or spot any bugs, just drop them here. I'm ready for your feedback.


I started indie hacking 6 months ago.

Thanks to AI tools, coding is now the easy part. I can build full-stack apps faster than ever.

But I hit a wall. I realized my problem isn't engineering anymore. It is finding users.

For 2026, I want to shift my focus. I need to stop worrying about "how to build" and learn "how to sell."

Marketing and distribution are now my top priorities.


Merry Christmas! This is a beautiful message, thank you for posting it.


Hi HN,

I built this because I wanted a simple tool to overlay grids on reference photos for drawing, but most top search results were either ad-ridden or required uploading images to a server.

GridMakers is different:

Privacy-First: It uses HTML5 Canvas to process everything locally in your browser. No image data is sent to my server (see the sketch after this list).

Specialized Modes: I added presets for Portrait (A4 crop), Mural (10x10 scaling), etc.

Tech Stack: Built with Next.js and Tailwind.
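
To make the Privacy-First point concrete, here's roughly what the local-only processing looks like (a minimal sketch, not the actual GridMakers source):

    // Draw an uploaded image plus an N x N grid entirely client-side.
    // The file becomes a local blob URL; nothing is sent over the network.
    function drawGridOverlay(file: File, canvas: HTMLCanvasElement, cells = 10): void {
      const img = new Image();
      img.onload = () => {
        canvas.width = img.width;
        canvas.height = img.height;
        const ctx = canvas.getContext("2d")!;
        ctx.drawImage(img, 0, 0);
        ctx.strokeStyle = "rgba(255, 0, 0, 0.6)";
        for (let i = 1; i < cells; i++) {
          // Vertical line i.
          ctx.beginPath();
          ctx.moveTo((img.width * i) / cells, 0);
          ctx.lineTo((img.width * i) / cells, img.height);
          ctx.stroke();
          // Horizontal line i.
          ctx.beginPath();
          ctx.moveTo(0, (img.height * i) / cells);
          ctx.lineTo(img.width, (img.height * i) / cells);
          ctx.stroke();
        }
        URL.revokeObjectURL(img.src); // Clean up the local URL.
      };
      img.src = URL.createObjectURL(file);
    }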

It's free and I'm just trying to make a useful utility. Feedback welcome! Link: https://gridmakers.app

