Brandon Sanderson often says in interviews that "laying bricks" is the best job a writer can have. He also says being a software engineer is a particularly bad job for writers because you cannot do it on autopilot. I can confirm.
Back then, all jobs moved at a much slower pace. There was a lot more off time during work hours.
The number of submissions to the high-energy physics category on arXiv has doubled this year compared to the historical average. The author hypothesizes that the increase is due to papers being written by LLMs.
> The surge of AI, large language models, and generated art begs fascinating questions. The industry’s progress so far is enough to force us to explore what art is and why we make it. Brandon Sanderson explores the rise of AI art, the importance of the artistic process, and why he rebels against this new technological and artistic frontier.
Do watch the video as it makes a compelling argument against this exact kind of thing. From a product design perspective, you're asking people to read a bunch of slop and organize it into slop piles. What's the point of that? Honestly it seems like a huge waste of everyone's time.
I think there's interesting work to be built on this data beyond just generating and sorting slop. I didn't build this because I enjoy having people read bad fiction. I built it because existing benchmarks for creative writing are genuinely bad and often measure the wrong things. The goal isn't to ask users to read low-quality output for its own sake. It's to collect real reader-side signal for a category where automated evaluation has repeatedly failed.
More broadly, crowdsourced data where human inputs are fundamentally diverse lets us study problems that static benchmarks can't touch. The recent "Artificial Hivemind" paper (Jiang et al., NeurIPS 2025 Best Paper) showed that LLMs exhibit striking mode collapse on open-ended tasks, both within models and across model families, and that current reward models are poorly calibrated to diverse human preferences. Fiction at scale is exactly the kind of data you need to diagnose and measure this. You can see where models converge on the same tropes, whether "creative" behavior actually persists or collapses into the same patterns, and how novelty degrades over time. That signal matters well beyond fiction, including domains like scientific research where convergence versus originality really matters.
Someone is using it to write a memoir, which I find incredibly ironic, since the goal of a memoir is self-reflection and they're outsourcing their introspection to an LLM. It says their inspirations are Dostoyevsky and Proust.
This is consistent with my own observations of LLM-generated code increasing the burden on reviewers. Either you review the code carefully, putting more effort into it than the original author did, or you approve it without careful review. I feel like the latter is becoming more common. This is basically creating tech debt that will only be realized later by future maintainers.
It’s a prisoner’s dilemma, too. The person who commits to giving code review its due diligence is going to end up spending an inordinate amount of time reviewing others’ changes, leaving less time to complete their own assignments. And they’re likely to request a lot of changes, too. That’s socially untenable for most people, especially ones who clearly aren’t completing as many story points as their teammates. Next thing you know, your manager is giving you less-than-stellar performance reviews, and the AI slopcoders on your team are getting the promotions and being put into position to influence how team norms and culture evolve over time.
The worst part is, this isn’t me speculatively catastrophizing. I’m just observing how my own organization’s culture has changed over the past couple of years.
It’s hitting the less senior team members hardest, too. They are generally less skilled at reading code and therefore less able to keep up with the rapid growth in code volume. They are also more likely to get assigned the (ever growing volume of) defect tickets so the more senior members can keep on vibecoding their way to glory.
As someone who reads newsletters for fun, my suggestion would be to subscribe to fewer newsletters and focus on the ones you enjoy reading the most. The product design seems to me motivated by FOMO and wanting to get all the info from all the newsletters. Think of newsletters less as a bucket you fill up and need to empty, and more as a river you can dip in and out of. Don't stress out about staying on top of all the hype. (I know you asked for product feedback but you got philosophical feedback instead.)
I love the 'river vs bucket' metaphor. You hit the nail on the head - FOMO was 100% the motivation here.
Ideally, I would just unsubscribe, but in my field (and as a student), I often fear missing a specific signal in the noise. My goal with this agent is actually to help me treat it more like a river: if the AI tells me nothing critical happened, I can skip reading without anxiety. Thanks for the perspective!
Why AI-generated novels will always feel like something's missing.
This post was inspired, in part, by a post last week of someone asking if they should start a blog in the age of LLMs. I hope my main argument here helps encourage people to pursue their own writing!
If you have any questions for me, you can drop a comment here or on the article. Thanks for reading!
My experience hasn't been LLMs automate coding, just speeds it up. It's like I know what I want the solution to be and I'll describe it to the LLM, usually for specific code blocks at a time, and then build it up block-by-block. When I read hacker news people are talking like it's doing much more than that. It doesn't feel like an automation tool to me at all. It just helps me do what I was gonna do anyways, but without having to look up library function calls and language specific syntax
> My experience hasn't been LLMs automate coding, just speeds it up.
This is how basically everyone I know actually uses LLMs.
The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
As a professional programmer, I think both are useful in different scenarios.
You're maintaining a large, professional codebase? You definitely shouldn't be vibe coding. The fact that some people are is a genuine problem. You want a simple app that you and your friends will use for a few weeks and throw away? Sure, you can probably vibe code something in 2 hours instead of paying for a SaaS. Both have their place.
I’m seeing vibe coding redefine what the product manager does: specifically, adding solution execution to the role's existing strategy and decision-making responsibilities. The PM puts solutions in front of a customer and sees what sticks, then hands over the concept to engineering to bake into the larger code base. The primary change is no longer relying on interviews and research to make product decisions that engineering spends months building, only to have them flop when they hit the market. The PM is being required to build and test dozens of solutions before anything makes its way to engineering resources. How engineering builds the overall solution is still under their control, but the fit is validated before it hits their desk.
I think the next step is to realize that this kind of product manager role is one that more "engineers" should be willing to take on themselves. It's pretty clear why user interviews and research and product requirement docs are not obviously within the wheelhouse of technical people, but building lots of prototypes and getting feedback is a much better fit!
I think the problem starts with the name. I've been coding with LLMs for the past few months but most of it is far from "vibed", I am constantly reviewing the output and guiding it in the right direction, it's more like a turbo charged code editor than a "junior developer", imo.
> The whole story about vibecoding and LLMs replacing engineers has become a huge distraction
Because the first thing that comes from individual speed-up is not engineers making more money but there being fewer engineers. How many fewer is the question: would they be satisfied with 10%, 50%, or maybe 99%?
Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
If we doubled agricultural productivity globally we'd need to have fewer farmers because there's no way we can all eat twice as much food. But we can absolutely consume twice as much CSS, try to play call of duty on our smart fridge or use a new SaaS to pay our taxes.
Oh but we can absolutely let all that food go to waste! In many places unbelievable amounts of food go to waste.
Actually, most software either is garbage or goes to waste at some point too. Maybe that's too negative. Maybe one could call it rot or becoming obsolete or obscure.
I have been around for “the past few decades”. In that time you saw the rapid growth of the internet, mobile, and BigTech. Just from the law of large numbers, BigTech isn’t going to grow exponentially like it did post-2010.
It’s copium to think that, with the combination of AI and an oversupply of “good enough” developers, it won’t be harder for developers to get jobs. We are seeing it now.
It wasn’t this bad after the dot com bust. Then if you were just an ordinary enterprise developer working “in the enterprise” in a 2nd tier city (raises hand), jobs were plentiful.
I think the better way to think of this is whether it will be harder for people who are good at using AI tools to accomplish things with computers to get jobs. Maybe, but I don't think so. I think this skill set will be useful in every line of work.
That doesn’t solve the problem. It’s easy enough to be “good enough” at AI tools just like it’s easy enough to be a decent enterprise CRUD full stack/back end/mobile developer. It will still be hard to stand out from the crowd.
I saw this coming on the enterprise dev side where most people work back in 2015. Not AI of course, but the commoditization of development.
I started moving closer to the “business”, got experience in leading projects, soft skills, requirements gathering, AWS architecture etc.
I’m not saying the answer is to “learn cloud”. I am saying that it’s important to learn people skills and be the person trusted with strategy and don’t just be a code monkey pulling well defined tickets off the board.
My point is: I don't think there will be way more jobs for "AI developers", I think there will be plenty of jobs for people who are employed in an industry and adept with using AI tools to be effective at their job. These people would not be differentiating themselves from other "AI developers", but from other people who do their role in whatever industry they are in, but who aren't as adept with these tools.
> Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.
I see this fallacy all the time but I don't know if there is a name for it.
I mean, we used to make fun of MBAs for saying the same thing, but now we should be more receptive to the "Line Always Goes Up" argument?
As counter-anecdata, I have family members who are growing businesses from scratch, and they constantly talk to me about problems they want to solve with software. Administrative problems, product problems, market research problems, you name it. I'm sure they have other problems they don't talk to me about where they're not looking for software solutions, but the list of places they want software to automate things is never-ending.
The consumer internet is mostly propped up by white-collar people buying stuff online and clicking on ads. Once the cutting starts, the whole internet economy just becomes a money-swapping machine between 7 VC groups.
The demand for paid software is decreasing because these AI companies are saying "Oh, don't buy that SaaS product, because you can build it yourself now."
SaaS is not just software though, it’s operationalized software and data management. The value has increasingly been in the latter well before AI. How many open source packages have killed their SaaS competitors (or wrappers)?
As much as I appreciate the difference between literal infinity and consumers' demand for software, there's just so much bad software out there waiting to be improved that I can't see us hitting saturation soon.
This reasoning is flawed in my opinion, because at the end of the day the software still has to be paid for (by the people who want/need to make a living from it), and customers' wallets are finite.
Our attention is also a finite resource (24h a day max). We already see how this has been the cause of the enshittification of large swathes of software like social media, where grabbing attention for a few seconds more drives the main innovation...
The demand for software has increased. The demand for software engineers has increased proportionally, because we were the only source of software. This correlation might no longer hold.
Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.
Not the Google kind of coding, but the "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.
Those are huge businesses, even if they are not present in the HN bubble.
> "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind
I'm afraid those kinds of jobs were already over by 2015. There have been no-code website makers available since then, and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-50 at this point; it's so efficient that even AI won't be more cost-effective than that. If you have $10k saved you can hire a competitive agency to maintain and build your website. This business has been completely taken over by low-cost Fiverr automators, with agencies handling high-budget projects. Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom-and-pop store.
I wonder exactly what you do, because almost none of your comment jibes with my knowledge and experience.
Note that I own an agency that does a lot of what you say is “solved”, and I assure you that it’s not (at least in terms of being an efficient market).
SMBs with ARR up to $100m (or even many times more than that in ag) struggle to find anyone good to do technical work for them, either internally or externally, on a consistent basis.
> I'm afraid those kinds of jobs were already over by 2015.
Conceptually, maybe. In practice, definitely not.
> There have been no-code website makers available since then
… that mostly make shit websites.
> and if you can't do it yourself you can just pay someone on Fiverr and get it done for $5-50 at this point,
Also almost certainly a shit website at that price point, probably using the no-code tools mentioned above.
These websites have so many things wrong with them that demonstrably decrease engagement or lose revenue.
> it's so efficient that even AI won't be more cost-effective than that.
AI will be better very soon, as the best derivative AI tools will be trained on well-developed websites.
That said, AI will never have taste, and it will never have empathy for the end user. These things can only be emulated (at least for the time being).
> If you have $10k saved you can hire a competitive agency to maintain and build your website
You can get an ok “brochure” website built for that. Maintaining it, if you have an agency that actually stays in business, will be about $100 minimum for the lowest-effort touch, $200 for an actual one-line change (like business hours), and up from there for anything substantial.
If you work with a decent, reputable agency, a $10k customer is the lowest on the totem pole amongst the agency’s customer list. The work is usually delegated to the least experienced devs, and these clients are usually merely tolerated rather than embraced.
It sucks to be the smallest customer of an agency, but it’s a common phenomenon amongst certain classes of SMBs.
> This business has been completely taken over by low-cost Fiverr automators, with agencies handling high-budget projects.
This is actually true. Mainly because any decent small agency either turns into one that does larger contracts, or it gets absorbed by one.
That said, there is a growing market for mid-sized agencies (“lifestyle agencies”?).
> Agencies have become so good now that they manage websites for everyone from Adidas to Lando Norris to your average mom-and-pop store
As mentioned above, you absolutely do not want to be a mom and pop store working with a web agency that works with any large, international brand like Adidas.
I appreciate your points from a conceptual level, but the human element of tech, software, and websites will continue to be a huge business for many decades, imho.
Anecdotal at best, but I have directly heard CTOs - and hear noise beyond my immediate bubble - talk about 10x improvements with a straight face. Seems ridiculous to me, and even if the coding gets 10x easier, the act of defining & solving problems doesn't. #nosilverbullet
I do software engineering at a research-oriented institution, and there are some projects I can now prototype without writing a line of code. The productivity benefits are massive.
Prototypes are always meant to be thrown away though, someone's going to have to redo it to comply with coding standards, scaling requirements, and existing patterns in the code base.
If the prototype can be just dropped in and clear a PR and comply with all the standards, you're just doing software engineering for less money!
> It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.
What’s “the vibecoding strawman”? There are plenty of people on HN (and elsewhere) repeatedly saying they use LLMs by asking them to “produce full apps in hours instead of weeks” and confirming they don’t read the code.
Just because everyone you personally know does it one way, it doesn’t mean everyone else does it like that.
I'd assume the straw-man isn't that vibe-coding (vbc) doesn't exist, but that all/most ai-dev is vbc, or that it's ok to derail any discussion on ai-assisted dev with complaints applicable only/mainly to vbc.
Neither of those would be a strawman, though. One would be a faulty generalization and the other is airing a grievance (could maybe be a bad faith argument?).
Though I get that these days people tend to use “strawman” for anything they see as a bad argument, so you could be right in your assessment. Would be nice to have clarification on what they mean.
Hmm, if the purpose of either is to make an "easier" target, I think it could still qualify as a straw man; I think an accusation of straw-manning is in part an accusation about another's intent (or bad faith: not engaging with the argument).
> Hmm, if the purpose of either is to make an "easier" target, I think it could still qualify as a straw man
Good point.
> I think an accusation of straw-manning is in part an accusation about another's intent (or bad faith: not engaging with the argument).
There I partially disagree. Straw-manning is not engaging with the argument but it can be done accidentally. As in, one may genuinely misunderstand the nuance in an argument and respond to a straw man by mistake. Bad faith does require bad intent.
Half strawman -- a mudman, perhaps. Because we're seeing proper experts with credentials jump on the 'shit, AI can do all of this for me' realization blog post train.
Well, I have a lot of respect for antirez (Redis), and at the time of my writing this comment he had a front page blog post in which we find:
"Writing code is no longer needed for the most part."
It was a great post and I don't disagree with him. But it's an example of why it isn't necessarily a strawman anymore, because it is being claimed/realized by more than just vibecoders and hobbyists.
> Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.
There's a crazy amount of hype, fear and blatant lies in the mix. And the pace is absolutely bonkers. The pace of announcements is even more bonkers. Maybe things will settle down to a new normal at some point.
You might think that everyone has FOMO or is an anti-AI Luddite when of course there are a LOT of us somewhere in the middle, just trying to get our work done and trying to figure out what our careers will look like in 5-10 years.
One big thing that no one seems to talk about - GenAI is unlocking many new (and oftentimes "small") business ideas that were not practical just a few years ago. I have witnessed this firsthand. . . however, it will also take away jobs. How many, who knows?
tl;dr everyone is full of shit or selling something or terrified to the point where they can't think straight. And no one has a crystal ball.
Yeah I also sense this disconnect between the reality and hype.
In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.
But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.
But what I am starting to see, is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, which they would have previously hired a programmer to do for them, or more likely, just decided it wasn't worth the effort. I think for those people, it does feel like a novel automation tool. It's just that we all already knew how to do this, by writing code. But most people didn't know how to do that. And now they can do a lot more.
And I think this is a genuine step change that will have a big effect on our industry. Personally, I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.
Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team, unless he means in generating lines of code, which is to say tech debt.
Being able to put a lot of words on screen is not the accomplishment in programming. It usually means you've gone completely out of your depth.
I think it does both: you can have an LLM automate bad coding (that's the vibe coding part), and you can have an LLM speed up good coding.
Many times, bad code is sufficient. Actually, too many times: IMHO that is the reason the software industry produces lower-quality software every year. Bad products are often more profitable than good products. But it's not always about making bad products: sometimes it's totally fine to vibe code a proof of concept or prototype, I would say.
Other times, we really need stable and maintainable code. I don't think we can or want to vibe code that.
LLMs make low-quality coding more accessible, but I don't think they remove the need for high-quality coding. Before LLMs, the fraction of low-quality code was growing already, just because it was already profitable.
An analogy could be buildings: everybody can build a bench that "does the job". Maybe that bench will be broken in 2 months, but right now it works; people can sit on it. But not everybody can build a dam. And if you risk going to jail if your dam collapses, that's a good incentive for not vibe coding it.
I notice a huge difference between working on large systems with lots of microservices and building small apps or tools for myself. The large-system work is what you describe, but for small apps or tools I resonate with the automate-coding crowd.
I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.
All I know is that firing half my employees and never hiring entry level people again nets me a bonus next quarter.
Not really sure why this article is talking about what happens 2 years from now since that’s 8 times longer than anything anyone with money or power cares about.
Hmm, I know this isn't true, because if management only thought quarterly, no one would ever hire anyone. Hiring someone takes 6+ months to pay off as they get up to productivity.
I can't tell if we're doing like a sarcastic joking thing where we're making fun of management, or if you really believe this. If we're joking around, then haha. If you really believe this to be true, then you have a warped view of reality.
The street cred doesn't come from managing more resources, the street cred comes from delivering more.
He’s keeping some around so he can fire half again next quarter for another bonus. That’s the sort of forward-thinking strategic direction that made him the boss man.
I’m doing both. For production code that I care about, I’m reading every line the LLM writes, correcting it a lot, chatting with an observer LLM who’s checking the work the first LLM and I are writing. It’s speeding stuff up, it also reduces the friction on starting on things. Definitely a time saver.
Then I have some non-trivial side projects where I don’t really care about the code quality, and I’m just letting it run. If I dare look at the code, there’s a bunch of repetition. It rarely gets stuff right the first time, but that’s fine, because it’ll correct it when I tell it it doesn’t work right. Probably full of security holes, code is nasty, but it doesn’t matter for the use-cases I want. I have produced pieces of software here that are actively making my life better, and it’s been mostly unsupervised.
I'm somewhere in between myself. Before LLMs, I used to block a few sites that distracted me by adding entries to the /etc/hosts file mapping them to 127.0.0.1 on my work machine. I also made the file immutable so that it would take a few steps for me to unblock the sites.
The next step was to write a cron job that would reapply chattr +i and rewrite the file every 5 minutes. Sort of an enforcer. I used Claude (web) to write this and cut/pasted it, just because I didn't want to bother with bash syntax that I've learned and forgotten several times.
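For anyone curious what such an enforcer looks like, here's a minimal sketch of the idea; the site list, file paths, and crontab schedule are my own assumptions for illustration, not the commenter's actual script (`chattr +i` sets the ext filesystem's immutable flag):

```shell
#!/bin/sh
# hosts-enforcer.sh -- hypothetical sketch of a cron "enforcer":
# rebuild /etc/hosts from a base file plus a blocklist, then make it
# immutable again. Site list and paths are illustrative assumptions.

SITES="news.ycombinator.com reddit.com"
BASE="/etc/hosts.base"    # the untouched part of /etc/hosts
TARGET="/etc/hosts"

# Emit one 127.0.0.1 entry per blocked site.
make_entries() {
    for site in $1; do
        printf '127.0.0.1 %s\n' "$site"
    done
}

# The privileged part (needs root; shown as comments rather than run here):
#   chattr -i "$TARGET"
#   { cat "$BASE"; make_entries "$SITES"; } > "$TARGET"
#   chattr +i "$TARGET"
#
# Installed via a crontab line such as:
#   */5 * * * * /usr/local/sbin/hosts-enforcer.sh

make_entries "$SITES"
```

Keeping the blocklist generation in a small function like this also makes the non-root part easy to test separately from the part that touches /etc/hosts.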
I then wanted something stronger and looked at publicly available things like pluckeye but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.
I wanted a program to use aider on, and I started with this. Every time I needed a feature (e.g. temporary unblocks, preventing tampering and uninstalling, blocking in the browser, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (single file).
Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).
I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.
Then I found it hogging memory because of some crude mistakes I had vibe-coded early on, and fixed that. Cost me around $2 to do so (Jan 2026).
Then I added support for locking the screen when I crossed a violation threshold. This required some Xlib code. I'm sure I could have written it, but it wasn't really worth it: I knew what to do, and doing it by hand wouldn't have taught me anything except the innards of a few libraries. I added that.
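As an aside, the screen-lock step doesn't strictly need Xlib; a hedged sketch of the same idea with a plain shell-out (the threshold value and lock commands are my assumptions, not the commenter's actual code):

```shell
#!/bin/sh
# Hypothetical sketch of "lock the screen past a violation threshold".
# THRESHOLD and the lock commands are illustrative assumptions.

THRESHOLD=3

# Succeeds (exit 0) when the violation count meets or exceeds the threshold.
over_threshold() {
    [ "$1" -ge "$THRESHOLD" ]
}

VIOLATIONS="${1:-0}"   # would come from the tracker's state file in practice

if over_threshold "$VIOLATIONS"; then
    # loginctl covers systemd desktops; xdg-screensaver is an X11 fallback.
    loginctl lock-session 2>/dev/null || xdg-screensaver lock
fi
```

Going through the desktop's own lock mechanism like this avoids maintaining low-level Xlib calls, at the cost of depending on what the distro ships.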
So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research didn't turn up any companies that offer this as a service for Linux. I knew what to do but didn't have the time to write and debug it. With AI, my problem was solved, and I have something that is quite valuable to me.
So, while I agree with you that it isn't an "automation tool", the speed and depth which it brings to the environment has opened up possibilities that didn't previously exist. That's the real value and the window through which I'm exploring the whole thing.
It seems alright, but I wonder if it crashes the economy for the vast majority of internet businesses. I personally run some tool websites, like ones to convert images or cut videos, and the traffic seems stable for now, but my tools don't target devs. Most likely you didn't actually need it, but who am I to judge; I find myself doing random projects too because they "take less time".
The reason it is better is that with search you have to narrow things down to one specific part of what you're trying to do. For example, if you need a unique-ID-generating function as part of what you're building, you first search for that; then if you need to make sure the output is a responsive 3-column layout, you might search for that; and then you write code to glue the things together. With AI you can ask for all of this at once, get something that is about what the search results would have been, and then do your glue code and fixes as you normally would.
Where a bit of functionality might have taken 4 searches, it trims the time requirement down by roughly 3 of them.
It does, however, remove a benefit of having done the searches yourself: seeing the various results and sometimes finding that a secondary result is better. You no longer get that. Tradeoffs.
I don't think that works. The fact that it can produce different output for the same input, its use of tools, etc., doesn't really fit into the analogy or mental model.
What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes; otherwise it'll code itself off a cliff and take my software with it. The most workable framing is a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying.
I feel like some of the frontier models are approaching a run-of-the-mill engineer who does dumb stuff frequently. That said, with appropriate harnessing, it’s more like go-karts on a track: you can’t keep them out of the wall, but you can reset them and get them back on a path when needed. Not every kart ends up in the wall, but all of them want to go fast, so the better defined the track is, the more likely the karts will find a finish line. Certainly more likely than if you just stuck them in a field with no finish line and said “go!”.
>We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
My experience (with minfx.ai) has been that it is very important to build a system that imposes lots of constraints on the code. The more constrained you can make it, the better; Rust helps a lot here. Thanks to this, for the first time in my career, I feel like the bigger the system gets, /the easier/ it is to develop, because the AI can discover and reuse common components, whereas a human would struggle to find them, and how to use them, in a large codebase. Very counter-intuitive!
If by block by block you mean you stop using an IDE and spend most of your time looking at diffs, sure. Because in a well structured project, that's all you need to do now: maintain a quality bar and ensure Claude doesn't drop the ball.
Neat. I'm seeing a lot of overlap with books mentioned on Reddit. I didn't realize, until now, how demographically similar Hacker News and Reddit are.
HN used to be a site for entrepreneurs to share ideas and work on things. Now the far left Reddit crowd has crashed it and anyone who has a successful business is just "lucky" and anyone who has earned wealth should have it stolen at gunpoint by the government to redistribute to those who don't produce anything.
If I compared HN today and Reddit 5 years ago, I'd agree, but I'm still extremely grateful for HN as I tried looking at Reddit this year and it actually made me feel like there's an extremely misinformed, radical, brainwashing happening there. I've never seen so much misinformation and negativity in one place aside from Truth Social or Threads.
HN today is equivalent to Reddit 5 years ago: not as great as it was when smaller 10 years ago, but still better than Reddit today.