I find using git for my notes annoying; Syncthing works great too if you have an always-online peer. Before I had one, I ran into conflicts that I guess would've been easier to resolve with git.
Basically it boils down to FOMO: no one wants to be left behind. Most users seem not to be experienced software developers, but people who've always wanted to code and for whatever reason didn't. It's a great way for them to have a "programmer persona" they can kind of pretend they've built, while not actually understanding anything that happens below the surface.
I ended up in a YouTube rabbit hole of "developer influencers" shilling these tools to their audiences.
It looks like a subcategory of wantrepreneurs who prey on wannabe developers. They've never worked on anything of importance but cosplay as domain experts and sell their expertise to people who don't know better. "How I launched 23 successful SaaS products in Q4 2025 with clawdbot"-type videos.
Mine blew through $50 of OpenRouter credits in a single day. Not worth it.
Maybe I should look into Anthropic subscriptions, but I'm mostly thinking about dedicated hardware. A used Mac Studio M1 Ultra has impressive memory bandwidth…
Is it possible to set up this kind of workflow with the plugin that comes bundled with VS Code, given that you have an enterprise GitHub Copilot account that includes Claude?
Subagents are a critical feature that GH Copilot still lacks. They allow your main agent to use another agent as a tool, meaning the main agent's context doesn't get nearly as polluted. Good read on the benefits of this pattern: https://jxnl.co/writing/2025/08/29/context-engineering-slash...
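Roughly the shape of it, as a minimal TypeScript sketch -- every name here is made up for illustration, not GH Copilot's or any vendor's actual API:

    // Subagent-as-tool: the subagent keeps its own transcript, and only
    // its final answer crosses back into the main agent's context.
    type Message = { role: "user" | "assistant" | "tool"; content: string };

    class Agent {
      history: Message[] = []; // private transcript, never shared
      constructor(readonly systemPrompt: string) {}

      run(task: string): string {
        this.history.push({ role: "user", content: task });
        // ... a real implementation loops over LLM and tool calls here,
        // appending every intermediate step to this.history ...
        const answer = `summary of: ${task}`; // stand-in for the model's reply
        this.history.push({ role: "assistant", content: answer });
        return answer; // only this short string reaches the caller
      }
    }

    const researcher = new Agent("You research codebases and report findings.");
    const main = new Agent("You orchestrate work using tools.");

    // The main agent sees one tool call and one short result; the thousands
    // of intermediate tokens the subagent burned stay in researcher.history.
    const finding = researcher.run("find where auth tokens are validated");
    main.history.push({ role: "tool", content: finding });

The main transcript grows by one tool message instead of the subagent's entire scratch work, which is the whole point of the pattern.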
"20X the usage of pro" still sounds like quotas where the hammer could fall as it becomes less of an experiment for a limited number of power users..
What I'd want to know, before investing in the skills for this high-usage style, is the cost of self-hosting reasonably sized models for development groups of various sizes; the current pricing might be mostly bankrolled by investors for now.
My guess is: absolutely not, at least not for more than a few minutes. Subagents chew through tokens at a very high rate, and this system makes heavy use of subagents.
Well, of course you get whoever you elected; that's a truism that holds for any method.
What method do you prefer? Trust the market and choose the one with the highest price, or choose the one recommended by most, aka the popular choice, the elected?
You're offering two choices which prove the point that electing is a poor way to fill a post.
"popularity" does not imply competence. Popularity is easily gamed and bought. Given that unlimited business money can be spent on elections, it's mostly bought.
I'm not sure what you mean by market, or highest price, but I assume you mean the above?
The opposite of elections is appointment. Based on competence. So, for example, in my company I want job x done well, so I appoint a person based on their ability to do x.
Of course this assumes I want x done well. If I'm elected, and I want x done badly, then I can appoint someone based on other factors, like ideology or loyalty etc.
In the end this relies almost completely on proprietary AI-as-a-service offerings, right? I think the exact services should be advertised as well, to help understand the limitations of the device.
E.g., can I use this device with any language, or is it English only? Can it do translations?
Honestly, I've been away from the field for quite a long time, so I wouldn't be up to date. But if you want a good framing of the field, how it evolved, and how it differs from other kinds of visualization (like scientific visualization), maybe start here [0a][0b].
There used to be a lively research field for information visualization that studied current visualization techniques and proposed new ones to solve specific challenges -- I remember when treemaps were first introduced, for example [1]. Large networks were a pretty big area of research at the time, with all kinds of centrality, clustering, and edge-minimization techniques.
A few teams even tried various kinds of hyperbolic representations [2,3], so that areas under local inspection were magnified under your cursor while the rest of the hairball was pushed off to the edges of the display. But with big graphs you run into quite a few big problems very quickly: local vs. global visibility, layout challenges, etc.
Not specifically graph related, but the best critical thinker I know of in the space is probably Edward Tufte [4]. I have some problems with a few bits of his thinking, and other than sparklines his contributions are mostly in terms of critically challenging what should be represented, why, how, and through which methods of interaction -- but his critical analysis has stayed up there as some of the best. He has a book set that's a really great collection of his thoughts.
If you approach this problem critically, you end up at the inevitable conclusion that trying to globally visualize a massive graph is, in general, basically useless. Sure, there are specific topologies that can be abstracted into easier-to-display graphs, but the general case is not conducive to it. It's also somewhat surprising how small a graph can be before visualizing it gets out of hand -- maybe a few dozen nodes and edges.
I remember the U.S. DoE did some really pioneering studies in the field and produced some underappreciated experts like Thomas, Cook and Risch [5,6]. I like Risch's concepts around visualizations as formal metaphors of data. I think he's successful in defining the rigorous atomic components of visualization that you can build up from. Considering OP's request in view of Tufte and Risch, I think that they really need to think about the potential for different metaphors at different levels of detail (since they specify zooming in and out). There may not exist a single metaphor that can visualize certain data at every conceivable scope and detail!
One interesting artifact of all this is that most of the research was long ago captured and commoditized or made open source. There really isn't a market anymore for commercial visualization companies, or grant money for visualization research. D3.js [7] (and its derivatives) more or less took millions upon millions of dollars in R&D and commercial research and boiled it down into a free, open-source library that captured pretty much all of the major findings in one place. It's objectively better than anything that was on the market or in labs at the time I was in the space, and it's free.
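For a sense of how far that got boiled down: the treemap technique from [1] is now a few calls into d3's hierarchy module. A rough TypeScript sketch -- the data and dimensions are invented, and the layout is computed as pure data, no DOM required:

    import * as d3 from "d3";

    interface Node { name: string; value?: number; children?: Node[] }

    const data: Node = { name: "root", children: [
      { name: "a", value: 40 }, { name: "b", value: 30 },
      { name: "c", children: [{ name: "d", value: 20 }, { name: "e", value: 10 }] },
    ]};

    // hierarchy() walks the children; sum() rolls leaf values up the tree
    const root = d3.hierarchy(data).sum(d => d.value ?? 0);

    // treemap() assigns each node its rectangle (x0, y0, x1, y1)
    const laidOut = d3.treemap<Node>().size([800, 600]).padding(2)(root);

    for (const leaf of laidOut.leaves()) {
      console.log(leaf.data.name, leaf.x0, leaf.y0, leaf.x1, leaf.y1);
    }

Decades of layout research, reduced to picking a size and a padding.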