The solution is to exercise information minimalism without caring about how you are perceived.
That being said, my brainstorming-style writing is not easy to follow, and I want my ideas to spread, not my wording. Therefore I often dump a raw brainstorming session into ChatGPT and tell it to make it understandable.
That is not conclusive evidence of being able to code. If you don't put yourself and your code out there for judgement, a lack of occupation is no rebuttal of ability.
But if you generalize this by asking: isn't code just a set of descriptions? Have you written a tutorial that people actually used? Then I would say yes.
Or if you share some snippet and 1000 people see it and some upvote, are you able to code? Also refer to my other replies in this thread for clarification.
Thank you for your compassion, I really appreciate that.
Yet it is not so much about downplaying myself as it is about asking whether what I did was useful, or even necessary. Is there inherent intellectual value in fixing dependency issues? Or is the real value in the actual idea? In the perfect description of the problem? Basically the antithesis of the age-old statement that "ideas are worth nothing, execution is what counts"?
Don't judge execution pre-2020 (or thereabouts) by execution in 2026. What you did might not be necessary if you were doing it today. But you were doing it then, not now, and back then it was necessary in order to do it at all.
Interesting, though not my government, as I am in Germany. But are those huge DeepSeek models worth it? It seems that only proprietary models can match up.
On the other hand, we need to talk specifics: match up how, and on which benchmark?
I'm also on this track, and I have this issue: when I see a wall of text (and I'm exposed to a lot of walls of text), it's just too much information and I can't comprehend the gist of it. That's what I wanted to say. By the same token, though, I don't think you're making it easier with this visual mind-map approach. But I don't really feel like sharing a better approach, because your question is commercial.
I agree that mind maps aren’t a universal solution — they can add cognitive load if the structure doesn’t match how someone thinks. I’m not assuming visuals are “better,” only that for some people, seeing relationships helps reduce the wall-of-text problem.
The question is more about understanding limits as much as benefits. If you’ve found approaches that work better for you, even at a high level, I’d still be interested — not to commercialize them, but to understand where visualization breaks down.
I understand why you consider his question relevant. At the same time, it is worth making a clear distinction: OP does not formulate an alternative empirical explanation of physical reality, but rather a philosophical reflection on the consequences of the simulation assumption itself. In this context, the question of experimental testability is generally meaningful, but it misses the point here because it presupposes a scientific hypothesis that OP does not even propose. His objection would be justified if OP were to claim truth in the scientific sense — but he does not.
It gets the job done; it's a great help.