You just need to find a smaller walled garden that can be tended, and not care deeply about having a massive audience; you can still find interesting conversation.
I've seen many Lemmy communities die because their creators abandoned them when they didn't quickly grow into thousands of members. This fixation on fast growth is pernicious: if web forums and Reddit showed us anything, it's that small communities are higher quality than big ones. Communities in the thousands require a lot of moderation effort to remain high quality.
Enjoy your small circle of internet strangers sharing a common interest; you don't need to go viral.
The gardens that need the most tending, and that will have the most impactful rewards for individuals and communities as a result of said tending, exist in meatspace. Stop searching for walled gardens on the internet and focus on whatever is around you wherever you are. Stop using "More social media but different this time!" as the solution to broken social media.
If anything, the open internet seems worse. Every Google search for some anodyne home-maintenance task returns hundreds of AI-generated slop "guides" with affiliate links. YouTube is the last refuge for real information on this kind of thing; coming across a human-written guide on the open web is increasingly rare.
I almost clarified that - Google Search is definitely part of that very centralized, corporate-owned web I was referring to. Like what you're describing is exactly what I'm talking about. But there are more and more niche obscure corners of the internet that you don't easily find, where good stuff is happening. People are still using IRC, Hotline, KDX, Gopher, and then there's newer stuff like Gemini ( https://geminiprotocol.net/ ), and potentially-invite-only close-knit communities on Mastodon and Lemmy. Oh yeah and then there's the alternatives to corporate stuff like Instagram -> PixelFed, YouTube -> PeerTube...
After the dot-com bust, there was the O-pocalypse that terrified me as a recent grad:
- Open source
- Outsourcing
- Offshoring
As a young man, I felt it was driving the labour cost of an engineer to zero.
Then time passed, and I learnt that engineers aren't paid to code. Engineers are paid to solve problems for a business.
If you recall, the dot-com bust and 9/11 crashed finances for a few years. When the money-printing gun went whirr, courtesy of "deficits don't matter" Washington, engineers were in demand again.
Right now we are in a weird situation where money is being printed and yet it is also tight. Most of it is going to the hardware and infrastructure layer, like the fiber-optic bubble of the dot-com era. Software will have its time in the sun again.
Take a look at the history of the power loom which automated weaving in the 19th century. The number of handloom weavers dropped two orders of magnitude after the power loom.
What happened last time is exactly what will never happen again, because those were all specific, one-off, path-dependent moments in time.
I think what you are missing is that it might not be possible to stay in business if you can't use AI to solve problems.
Before the dot-com bust, I was paid in college to file papers in file cabinets all day at an office. Ten years on, the paper was gone, the file cabinets were gone, and obviously the paper-filer job was gone. Even the business that employed me was gone, because it was a dinosaur that couldn't leverage technology well and was put out of business by competitors who could.
There is huge denial on this board that everything is going to be fine hand-coding on the legacy systems of dinosaur companies. It seems more likely that if a company has so much technical-systems debt that models aren't useful, it is not going to be competitive in its area of business.
Some people discuss these dynamics as sheep versus goats. Social stability was more precious due to scarcity, while goat behavior included 40 armed men killing their rivals with swords (and better still if the rivals do not have their own swords). Many, many parallels exist in mammals that live in groups. You might be surprised at the details of how some mammals actually behave in real life!
I find these throes of passionate despondency similar to the 1980s personal-computing revolution. Oh dear, giving mere mortals the power of computing?! And yet, how many people have abandoned their computers or phones?
It’s not like it changes our industry’s overall flavour.
How many SaaS apps are excel spreadsheets made production grade?
It’s like every engineer forgets that humans have been building a Tower of Babel for 300,000 years. And somehow there is always work to do.
People like vibe coding and will do more of it. So make money fixing the problems the world will still have when you wake up in the morning.
I am not against vibe coding at all, I just don't think people understand how shaky the foundation is. Software wants to be modified. With enough modifications, the disconnect between the code as it is imagined and the code in reality becomes too arduous a distance to bridge.
The current solution is to simply reroll the whole project and let the LLM rebuild everything with new knowledge. This is fine until you have real data, users, and processes built on top of your project.
Maybe you can get away with doing that for a while, but tech debt needs to be paid down one way or another. Either someone makes sense of the code, or you build so much natural language scaffolding to keep the ship afloat that you end up putting in more human effort than just having someone codify it.
We are definitely headed toward a future where we have lots of these Frankenstein projects in the wild, pulling down millions in ARR but teetering in the breeze. You can definitely do this, but "a codebase always pays its debts."
Yeah, the more things change, the more they stay the same. This latest AI hype cycle seems no different, which I think will become more widely accepted over the next couple of years as creating deployable, production-ready, maintainable, sellable, profitable software remains difficult for all the reasons besides the hands-to-keyboard writing of code.
Or drop the price to $20 a year instead of $20 a month and focus on small software updated infrequently. Software-as-a-service has a dirty secret: it was always more service than software. The companies became larded with payroll, and most never had great gross margins.
Pretty much. A lot of software is good enough already; just keep the security updates going and fix the occasional bug people complain about for too long.
But that might require firing some people, because that amount of man-hours is no longer needed, or moving them to build something new, and no investor likes that.
It’s a simple math problem. And Conway’s law says that a system’s design mirrors the communication structure of the organization that built it; that is, all software design is political.
A framework calls you. You call a library.
A framework constrains the program. A library expands the program.
It’s easier to write a library that is future proofed because it just needs to satisfy its contract.
It’s harder to write a framework because it imposes a contract on everything that depends on it.
Just like it is hard to write tort law without a lot of jurisprudence to build out experience and test cases, it is hard to write a framework from only one use case.
No one likes lawyers because they block you from doing what you want. This is the problem with frameworks.
However, the government likes laws because they block you from doing what you want. Same with whoever is directing engineering and wants all other programmers to work in a consistent way.
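The "a framework calls you, you call a library" distinction can be shown in a few lines. This is a minimal sketch with invented names (`MiniFramework`, `handle_request`), not any real library's API:

```python
import json

# Library style: your code is in charge and calls the library (here, json).
def handle_request(raw: str) -> str:
    data = json.loads(raw)          # you call the library
    return data["name"].upper()

# Framework style: you hand the framework a function and it calls you,
# imposing its contract (registration, signature, dispatch) on your code.
class MiniFramework:
    def __init__(self):
        self._handlers = {}

    def route(self, path):
        def register(fn):
            self._handlers[path] = fn   # the framework owns the flow
            return fn
        return register

    def dispatch(self, path, raw):
        return self._handlers[path](raw)  # the framework calls you

app = MiniFramework()

@app.route("/greet")
def greet(raw):                     # must fit the framework's contract
    return handle_request(raw)

print(app.dispatch("/greet", '{"name": "ada"}'))  # prints ADA
```

Note how the library leaves control flow with the caller, while even this tiny framework dictates how `greet` must be registered and invoked; that is the contract it imposes on everything depending on it.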
Technically everything you have written is true. But the proliferation of frameworks is almost a self-reinforcing antipattern.
> No one likes lawyers because they block you from doing what you want.
Or even doing what you need to do.
Certainly, to the extent that a mini-framework is composed of more constraints piled on top of an extant bigger framework, mini-frameworks are, like swimming pools, attractive nuisances. "Hey, look, guys! This is so much simpler!"
> It’s harder to write a framework because it imposes a contract on everything that depends on it.
Judging by what people write and use, I'm not sure this is _exactly_ true. Sure, writing a _good_ framework or library is hard, but people accept piss-poor frameworks, and accept libraries that were designed to work in conjunction with a single framework.
> It’s easier to write a library that is future proofed because it just needs to satisfy its contract.
But the thing is that the library itself defines the contract, and it might be a piss-poor one for many applications.
There is some excellent code out there, and there is a lot of shitty code out there. I think the problem is social; too many people want to write code that is in charge. Now, maybe it's somewhat technical, in that they have used things that are in charge, and they were too big (leading to the mini-framework of the article) or they were otherwise not great, so this leads to yet another framework (cue standards xkcd cartoon) because they realize they need something in charge, but aren't happy with their current options.
And, of course, since the frameworks they know take a kitchen sink mentality, their new framework does as well. (Maybe it's a smaller sink, but everything needed is still shoved in there.) So there are yet more libraries that are tied to yet another framework.
Because writing good libraries that are completely framework-independent _can_ be as challenging as writing a good framework. And when someone decides they need a new framework, they are focused on that and on making it work well, and since that drives their thought process, everything else they write gets shoved into the framework.
Thank you. I thought I was going crazy reading the article, which doesn’t connect open and close parenthesis :: higher and lower precedence :: indent and outdent :: +1 and -1, and just flip it around to get the opposing polarity.
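The +1/-1 polarity idea can be made concrete: treat every opening token as +1 and every closing token as -1; balance means the running sum never dips below zero and ends at zero, and negating the polarity swaps "open" for "close". A minimal sketch (the names here are invented for illustration):

```python
# Opening tokens carry polarity +1, closing tokens -1.
POLARITY = {"(": +1, ")": -1, "[": +1, "]": -1}

def depth_profile(s):
    """Running nesting depth after each character."""
    depth, profile = 0, []
    for ch in s:
        depth += POLARITY.get(ch, 0)
        profile.append(depth)
    return profile

def balanced(s):
    """Balanced iff depth never goes negative and ends at zero."""
    profile = depth_profile(s)
    return all(d >= 0 for d in profile) and (not profile or profile[-1] == 0)

print(balanced("(a[b])"))   # True
print(balanced(")("))       # False: closes before it opens
```

The same counter models indentation (indent = +1, outdent = -1) or operator precedence climbing, which is the correspondence the parent comment is pointing at.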
You’re right and wrong at the same time. A quantum superposition of validity.
The word "thinking" is doing too much work in your argument, but arguably "assume it’s thinking" is not doing enough work.
The models do compute and can reduce entropy; however, they don’t match the way we presume things do this, because we assume every intelligence is human, or more accurately, the same as our own mind.
To see the algorithm for what it is: you can make it work through a logical set of steps from input to output, but it requires multiple passes. The models use a heuristic pattern-matching approach to reasoning instead of a computational one like symbolic logic.
While the algorithms are computed, the virtual space in which the input is transformed into the output is not computational.
The models remain incredible and remarkable but they are incomplete.
Further, there is a huge garbage-in, garbage-out problem, as often the input to the model lacks enough information to decide on the next transformation to the code base. That’s part of the illusion of conversationality that tricks us into thinking the algorithm is like a human.
AI has always provoked human reactions like this. ELIZA was surprisingly effective, right?
It may be that average humans are not capable of interacting with an AI reliably because the illusion is overwhelming for instinctive reasons.
As engineers we should try to accurately assess and measure what is actually happening so we can predict and reason about how the models fit into systems.