Stack Overflow announces 28% headcount reduction (stackoverflow.blog)
119 points by brycewray on Oct 16, 2023 | hide | past | favorite | 171 comments


It's incredible how this community here, which is mostly against Discord as a forum because it's not indexed and publicly searchable, is so into ChatGPT as opposed to an indexable platform like SO.

Yeah, as someone who moderates a 200,000-member community, moderation is difficult. They do a pretty good job considering the amount of shitty, low-value, googleable questions they get.

What do you all do if ChatGPT's prices rise? Or you're using a technology it knows nothing about? Back to IRC channels and reading docs? Where will you copy paste your code from? Cling to it and hope an alternative comes along, all the while shoving more money into closed models?


> They do a pretty good job considering the amount of shitty, low-value, googleable questions they get.

Yeah, see, this is the problem. Fuck that, man. The vast majority of people who want to ask questions are very new. They barely know how to phrase the question they're trying to ask. If they knew how to ask it, they probably wouldn't need to post a question; they could search for it. SO is full of elitist pricks who are very rude to people who are trying to ask a question, don't know how, and are feeling frustrated. It's a beginner-hostile environment. And that's not intuitive, because you DO end up hitting Stack Overflow for a lot of basic questions from 10 years ago, and that's nice, but it is bad for people who lack the expertise to navigate that historical index.

Discord is not indexed, but it's largely full of people who are willing to help newbies construct their question and then answer it. It's a night-and-day better experience. Same for ChatGPT.

I have spent a fair amount of time on the unreal engine discord helping what I assume are kids who have never done any programming before realize that their questions make no sense and that they don’t know what a variable is. They are extremely thankful.

SO just makes people feel bad for not being experts on the topic for which they are seeking help to learn.


> They barely know how to phrase the question they’re trying to ask. If they knew how to ask it, they probably wouldn’t need to post a question

This is a big one. Often you know what you need to do, but knowing what to search for is hard.


I wonder if a GPT-powered SO would help cut down on newbie questions. I've found a lot of success ingesting an internal FAQ/documentation corpus and wrapping it up in a little chatbot, then asking the bot questions about it. It works better than Ctrl-F or some sort of fuzzy searching, and lets you ask more general questions but still get specific answers. If the bot doesn't know, you can safely assume the question hasn't been asked, or that it's different enough to warrant asking on SO.
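For what it's worth, the retrieval half of that idea can be sketched without any LLM at all. This is a toy illustration, not the commenter's actual setup: plain keyword overlap stands in for real embeddings, and the FAQ entries are invented.

```python
# Toy sketch of "ingest an FAQ, then ask it questions".
# Real versions embed entries with an LLM; here Jaccard word
# overlap stands in for semantic similarity.

FAQ = {
    "How do I reset my API key?": "Go to Settings > API and click Regenerate.",
    "What is the rate limit?": "100 requests per minute per key.",
    "How do I report a bug?": "Open an issue on the internal tracker.",
}

def tokenize(text):
    return set(text.lower().split())

def ask(question, threshold=0.4):
    """Return the best-matching FAQ answer, or a fallback."""
    q = tokenize(question)
    best_answer, best_score = None, 0.0
    for known_q, answer in FAQ.items():
        k = tokenize(known_q)
        score = len(q & k) / len(q | k)  # Jaccard similarity
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score < threshold:
        # Low overlap: safe to assume the question hasn't been covered.
        return "Not covered - consider asking this on SO."
    return best_answer

print(ask("what is the rate limit?"))  # -> 100 requests per minute per key.
```

The thresholding is the point: when nothing clears it, the bot says so instead of guessing, which is the "safe to assume it hasn't been asked" behaviour described above.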



When I was starting out learning to program I found ESR's how to ask questions[0] guide invaluable.

Half the time, just thinking about how to ask a good question will lead you to a solution. If nothing else, if you're not finding an answer through searching, you're probably just asking the wrong questions, because the odds that you're solving some novel, undocumented problem are pretty much zero.

So usually it's a matter of thinking you know what you need to do, only to learn that you didn't actually know what you needed to do at all once unknown unknowns are unearthed. That's where something like Discord shines: it provides an avenue where extremely patient people will do some prodding of a newbie to help figure out what they're actually trying to do, so they can be pushed in the right direction. But that's also a double-edged sword, because searching Discord is extremely difficult and has a high barrier to entry, so you're often going to be faced with the same questions being asked over and over.

[0] http://catb.org/~esr/faqs/smart-questions.html


It isn’t just beginner hostile. The more niche and advanced your question is, the more likely it is to be incorrectly marked as a duplicate.

Besides that, the answers you do find are often not helpful. For example, “just use this function in jQuery” - I’m asking about JavaScript, not jQuery.


> Besides that, the answers you do find are often not helpful. For example, “just use this function in jQuery” - I’m asking about JavaScript, not jQuery.

Since jQuery is open source, I do consider this to be at least a helpful step, since you can then simply look at the source code of the respective jQuery function.


This realization hit me after I wrote my original comment. Somehow never crossed my mind. I’ll definitely try this next time.


This. The more experienced you get, the less you rely on SO and the more you turn to other sources. As a junior you usually don't know exactly how to ask, or what to ask, in order to make progress.


In my mind this was never what SO was for, though - it's for specific questions that are asked well. Unlike Google.


Perhaps.

But I doubt this is a value prop people want.

Edit: sufficient to sustain SO in light of new competitors


That’s probably because SO has been useless for a long time now.

They got so uptight about moderating content that they seem to have forgotten their actual user value. Instead of focusing on helping users find more relevant content, they've decided they must eliminate any possible duplicate or overlapping content.

For me, I've found most of my issues are complex and nuanced. I want the duplicate content. I want to read five posts with slightly different variations of the same issue. They all provide contextual clues and variations that are useful for solving my specific issue. Instead, they all get forced into the same post, resulting in a bunch of absolutely useless junk. Top that off with the fact that some information is actually stale, and it's basically impossible to find a solution now.

ChatGPT often gives me solutions that are mostly correct. Even when they’re wrong, they often get me close enough that I can read the docs.


It's a classic perverse incentive. "Moderators" generally only get credit when they "moderate" stuff. If they don't "moderate" a lot, it looks like they're doing nothing.


The stuff Copilot helps me with is not the stuff SO could help me with.

Check out 6:05 and 6:25 in this Andreas Kling video: https://www.youtube.com/watch?v=8mxubNQC5O8

That's where Copilot shines, not for locating and copypasting answers from Stack Overflow.


Right. But this is about ChatGPT, which is getting used more like a replacement for SO because of its ability to answer questions, despite the lack of editor integration.


> It's incredible how this community here, which is mostly against Discord as a forum because it's not indexed and publicly searchable

Discord sucks as a community for open source/software development for more than those reasons. The major one, I find, is the population of gamer/anime users that dominates it - I feel a lot of serious developers stick to Slack or mailing lists for that reason. Slack's identity is clear - enterprise/business chat and communication. Discord's is a bit more confused, and a lot of people straight up haven't even heard of it.


"What do you all do if ChatGPT's prices rise?"

Pay for your tools? This is not exactly a community of low-income people here.


With how good local models are getting this might not even be necessary. Code Llama derivatives aren't quite there yet for me but if the $20/month meant much to me they would be.


I'm not so sure using local models is a cheaper alternative. Computation/electricity is not free and OpenAI is heavily subsidised. It feels like people think local models are free just because they don't need to pay a third party directly.

I think the real benefit of local models lies elsewhere.


> Pay for your tools? This is not exactly a community of low-income people here.

The prices of such tools are often oriented towards people who have high-paying jobs in Silicon Valley. Not everybody who reads or posts on Hacker News belongs to that group. :-(


A lot of people here have no income and are trying to get into software (or working on their pre-revenue startups).

Especially the people who need a better Stack Overflow are likely not to have huge salaries.


Students... :/


> It's incredible how this community here, which is mostly against Discord as a forum because it's not indexed and publicly searchable, is so into ChatGPT as opposed to an indexable platform like SO.

I don't think you can say this is a community with opinions. Rather, there are various subgroups of people with strong opinions and certain posts attract one subgroup and certain posts will attract others. You can find a whole lot of people telling you building an app "on top of" an LLM is a dumb plan because they aren't reliable. And you can find a lot of people telling you to delegate all your drudgery to an LLM. That might seem a bit contradictory but it's really slightly different topics pulling in slightly different people.

And in this case, there are lots of people willing to tell you bad things about SO. The story behind such bad feelings is that SO was fun to answer questions on, and now it's really miserable. It may or may not still be useful in an objective sense, but it's accumulated massive bad feelings. So just about any SO thread attracts the SO-hating subgroup rather than any other subgroup.


Just keep using 2 year old versions of software it knows about.


I feel like moderation will be one of the first things automated away.


We could all export our ChatGPT discussions into a local application which uses AI to filter out any PII, then reformat and upload the coding-related discussions to an SO alternative.


At that point you've just recycled the original training data :/


Enriched with a guiding discussion in order to achieve the goal.


Back to working out the solution yourself?


That's how most of my work and hobby projects go, but I hear there are people copy pasting code and refusing to read documentation.

Working it out yourself is always going to be a good way, IMO


I think that depends on the problem.

My main use of SO is figuring out badly documented DSLs or configurations or magic annotations that have nothing to do with programming per se.

I can dig into a language I don't actually know and change a value or fix a trivial bug, as long as it has variables, methods and loops.

But I can't guess if @Zorpify is the magic incantation on a method that does something I don't know the name of.


No wonder. If I ask ChatGPT a question, it answers it.

If I asked Stack Overflow a question, they made sure I wished I hadn't.


My experience differs:

- Ask ChatGPT (or a similar AI chatbot) a question: at best it gives a semi-true answer. In the best case, I can immediately see where the mistake in the answer is.

- Look at Stack Overflow: very often find an answer

- In the rare situation that I cannot find an answer on Stack Overflow and ask a question: I get a good answer (often many good answers) after a few hours, which lets me realize that I have a lot to learn about the respective programming topic, which I will do.


> Very often find an answer

- from 10 years ago, and an entirely different version of the language/library

- from more recently, but it's been closed as a duplicate

- with a specific caveat, which that guy ignored, and the question was also closed as a duplicate.

- with a wrong answer, that you'd like to post a correct answer for, but you haven't karma farmed enough


Pay for GPT 4 and use that, it's much better than 3.5


Doesn't ChatGPT probably regurgitate stackoverflow answers? If stackoverflow doesn't exist, ChatGPT probably also gets worse as it doesn't have the data necessary to be able to answer questions.


With full access to github and github issues it does just fine with whatever knowledge it has


You mean the data that Stackoverflow was given freely by millions of people? They have no more moral claim to it than your local park has to the content of conversations spoken within its limits.

Besides, GitHub, GitLab, Bitbucket, etc. have far more public data to train on; Stack Overflow only really provides the schema one would want for fine-tuning, to make sure the answers are in the right general question-code-answer-code format.

Once that is trained properly (i.e., to the level of compression that implies understanding, not just memorization), you only need to update the model on new libraries and languages, which can be trained from repos and docs directly.


They provided the platform and the medium by which questions get answered. That's not free, and expecting to get it for free isn't realistic.


Plus they already provide the data powering the site (plus the site code itself, I think?) under a lenient CC license. What more can we ask for? I don't get why people are so hell-bent on screwing over a good citizen.


Getting things for free isn't just realistic these days, it's genuinely become the norm. Competition is infinite and there's always somebody willing to bleed VC capital to offer things for free until they finally attempt proper monetization and everyone jumps ship to the next one that's earlier in the cycle. The one unicorn that does successfully monetize then subsidises all the rest through further VC funding in a roundabout startup UBI of sorts.

Besides, Stack Overflow runs ads and profits directly from users' posts. You don't get to own things just by virtue of being a platform that lets people host things. Does GitHub own all the code people host there? Does a cloud service own the software people put in their containers? Of course not. Stack Overflow themselves state that each answer is the property of the poster, licensed as CC BY-SA. Even Reddit's TOS states that "You retain the rights to your copyrighted content or information that you submit to Reddit".

Now whether there should be a class action lawsuit including everyone who's ever posted on Reddit or Stackoverflow against OpenAI is another story.


People answered for free because the CC license ensured that if the company tried something evil, the community would be free to fork it.

They ran ads to cover the server and development costs. A fork would have to find its own source of money.


It could interpolate from the original source - the full code of open source codebases and their docs.

It'd only get worse in cases where only a human had access to the knowledge or the means of attaining that knowledge.


Remove stackoverflow from the training and revise your assumptions. ChatGPT is not connecting the dots based on source code and docs.


While I've had SO mods criticize my questions, I've had more success getting correct answers from SO than from ChatGPT. With SO, I also have the benefit of knowing answers have been mostly vetted by others. If one is wrong, someone will usually comment on it. Not so with ChatGPT.


It just depends on the questions. I have gone from asking 2 or 3 questions per month (and reading hundreds per month) to using ChatGPT for all of it, because it gets the questions better and the response is immediate.


GPT 3.5 or 4?


You know, I haven't heard anybody talk about this yet! I was wondering, is gpt4 better than 3.5, or worse?


It's miles ahead of 3.5, the difference is really really big. Mainly because GPT-4 is just so much better at "understanding" and logic, it follows everything you ask for or say very, very closely.


Incredible! I am sure its amazing to use. That said, I doubt it could, say, answer questions from an LSAT exam and get them right though. That would just be way too unbelievable to me.


I'm not sure if you're joking or not, because OpenAI actually used LSAT as one of the examples where GPT-4 is much better - https://openai.com/research/gpt-4

https://files.catbox.moe/l5n5gd.jpg for some of the results in their technical report


Amazing! How lucky we are to be alive for such advancements in Technology. They are really doing something special at OpenAI.


GPT-4 is better than 3.5, but I think that's also because 4 uses larger models because you are paying, while 3.5 has been cut down. I still find that 4 will give bad technical answers... for example, ask it to help you write a Caddy reverse proxy config and it will give you garbage, very emphatically.
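For the record, a minimal working Caddy v2 reverse proxy config really is just a few lines (the hostname and upstream port here are placeholders):

```
example.com {
    reverse_proxy localhost:8080
}
```

If a model produces much more than that for the basic case, that's a hint something is off.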


Gotcha. I usually use nginx anyway, so no worries there.


Much better


Wow!


I was a bit amazed to see this same antisocial behaviour in the gaming stack exchange. “Obviously you haven’t read the manual (for a 20+ year old game) so I’m closing this question.”

Is there something about stackexchange format that accidentally encourages this? I go on various discords (learn piano, javascript, typescript, various games) and the absolute worst treatment I got was nobody replying.


I don't think it's accidental. The format is strongly optimised for producing question/answer pairs that are later searchable by people who have the same-ish problem. Among other things, this means getting rid of unanswered questions that would otherwise clutter the search results.


Yep. SO wants to be a wiki, but in Q&A format. Its rules are designed not primarily to be a Q&A site, but rather a searchable knowledge base. But the interface is largely designed around making it look like a Q&A site, and this conflicting purpose causes a lot of the negative interactions (IMO).


This is absolutely false. Yes, there is a bit of a learning curve on how to phrase questions on Stack Overflow, but it mainly comes down to "ask a clear question". SO certainly still has better answers than ChatGPT, and I say this as someone who has been talked down to on SO before. I understand where they are coming from. At the end of the day, you get nice, clear, tagged questions with proven-out answers. Way more than you can say for ChatGPT, which generally gives random trash.


The odd thing is that I really tried to help people for a while on Stack Overflow, but if you answer somebody's question you rarely get an accepted answer or even any reply at all - not even a "Sorry buddy, that didn't work" or anything. Lots of people just dump and abandon their questions.

Most of my reputation gained there is because I answered my own question and ironically - just once - copied a solution from a thread on GitHub and pasted it there.


Almost all my reputation is from a question [1] that I just happened to see and answer before anyone could close it as "bad quality" or "duplicate".

Had I been a minute or two later I suspect it would have been closed as poor quality.

I only spotted it by pure chance because I had just wondered the same thing before having a "duh" moment when I realised it was a really simple solution to what sounds complicated (but isn't).

So I was able to interpret the "real" question and answer it.

But, as you say, never a bit of feedback from the question asker. No acceptance, just abandonment.

It just slowly gathers karma, around 10 per month for ~10 years.

[1] https://stackoverflow.com/questions/27607516/sql-join-with-c...


> they made sure I wished I hadn't.

Which says more about your style of asking than about the SO community.


Not really. There are dozens of reasons why you would want to do something the non-standard way and not "the one recommended" way, but it's mostly impossible to ask that on SO because 3 people will jump in to tell you "no you shouldn't".

You could try an "I know I should be doing X but I want/need Y" disclaimer (also often useless) or spend half a novel explaining, only to get told "can you make that a shorter repro?" or, back to square one, "you shouldn't do that".


That says more about the fact you don’t have to bother about the style when you’re asking ChatGPT.


Because it is not resource-constrained.


And people white knight for that behavior left and right.

I refuse to interact with the platform after these encounters.


How is it that I asked like 100 questions in the last 5 years and hostile behaviour was visible in maybe 2 or 3, yet people act as if this was the most toxic place on the internet?

All I was doing was just: putting effort into my questions


Probably because your brain works in the same way as the mods think, and so you ask questions that fit the expected format. Or your questions are focused with one right answer, because IME SO hates any question which assumes - unintentionally or intentionally - there are grey areas or more than one way to do it.


The answer is right there: "putting effort into my questions".

A lot of questions show that people have not done that, by either reading the manuals or searching SO.

Why should an answerer put effort into writing a good answer when the questioner has not shown any effort?


A broader problem is that some people do not really want one answer to one question but basically some tech support to solve their problem. Either they then ask a series of separate questions (not always fun times) or try to engage in the comments (which then get deleted!).


I would not want to white knight for SO, but perhaps grey knight?

My understanding of the reasoning behind some of the moderation - flawed as that often is - is to make the questions and answers 'general'. Which means a question that is very specific to one person may be voted to close or voted down.

To take an example of another stack exchange site (chemistry.SE), they get a lot of 'homework questions'. Literally some questions are like "I have an exam in an hour PLEASE HELP". The stated policy is _not_ to just answer these questions, but to guide people towards a solution. Very frustrating for the person asking the question - who, you know, has an exam in an hour ...

Ultimately there are policies and processes intended to improve the overall quality, and sometimes that is a good thing and sometimes it leads to everyone getting annoyed with the whole thing.


"General" isn't right, "searchable" is.

A great SO question ① is about programming, ② has a factual answer and ③ if other people need that answer, then their search terms match the wording in the question. It's that simple. Answering that kind of question is worthwhile for people like me, because many people will later search, find the question and benefit from the answers. Spread the knowledge!


The issue is that power tripping mods prevent many questions from becoming great ones.

And I disagree that questions need to have one factual answer. Great questions often have many highly ranked answers showing different ways to solve issues or emphasizing different aspects of solving the question.

A great question is one which others also have.


1. I didn't write "one" and I didn't write "a single".

2. It sounds as if you think the moderators and the answerers are disjoint sets. They're not, at least not in the tags that I follow. The power tripping mods are the same people who contribute a majority of the answers. The decision of whether to answer or close is one decision, not two decisions made by different groups of people.


Okay, you wrote 'a'.

Mostly, moderators and answerers feel like disjoint sets, because the majority of answers come from the long tail of people (at least that's how it feels to me).

But it is likely similar to Wikipedia, where the moderators feel like they do most of the work when in reality they don't, and others would be happy to do the work if the moderators didn't hamper them.


I provide answers for a few tags. Most of the moderation acts on those tags are made by people whose names I've seen time and time again on answers and comments. I see a bad question in my feed, click to close it, and see 'closed by the votes of' and the names are names I know.

There have been exceptions, e.g. we asked a site moderator to wipe someone's base64-encoded password from a question. The user probably changed the password, but we got a site moderator to wipe it thoroughly without waiting for that. I know those site moderators do a ton of work, but the power-trip behaviour you see as a questioner is done by answerers like me.

The site moderators do things like cleaning up vandalism or spam, when someone uses a script to add outgoing links to a thousand pages.


> a question that is very specific to one person may be voted to close or voted down

That's the opposite of the guidance, or at least the guidance as it was back when I was active on SO.

General questions "What's the best way to do X" are shut down.

Questions should be specific and about actual problems you have rather than general problems.

Solving real problems with real solutions builds up a strong data set, not questions about imagined problems and their answers


My impression of stack overflow is they were looking for questions in a Goldilocks zone.

"How do I sort a list of US States in Python 3.10?" Too specific.

"How do I sort a list of strings in Python?" About right.

"How do I sort a list with my computer?" Too general.
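And the "about right" question really does have the crisp, searchable one-liner answer SO is optimized for, e.g. in Python:

```python
# The usual answer: sorted() with a key for case-insensitive ordering.
states = ["texas", "Ohio", "maine", "Idaho"]
print(sorted(states, key=str.lower))  # -> ['Idaho', 'maine', 'Ohio', 'texas']
```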


That's fair - maybe I expressed it badly, or it's just hard to come up with succinct rules for questions.

Consider two scenarios: a) I have installed some software X but messed up one of the install steps and now there is this weird error. Or b) when installing this software, it always errors if library Y is on an old version.

For b) it is clearly a 'general' problem that many people will face. For a) it could be seen as a user error and therefore not generally useful. From the point of view of the people facing a) though, it absolutely is a problem!


"The best way" type questions would still create a ballot, with the best solution probably rising to the top for use by others who encounter a similar situation. People aren't always starting from a very specific problem, but from a vague general one.

Ultimately though, Stack Overflow just faced the customer service curse: as volume rises, moderation gets blunter.


SO doesn't have such ballots for a reason: SO doesn't want opinionated discussions. If you think that choice is wrong, SO is not going to be a good place for you. But there are very many other sites where you can have opinionated discussions.

May I ask your SO id? It'll be a link like https://stackoverflow.com/users/1, except with a different number.


Really feels like Stack Overflow lost a huge bet on their cultural relevance with how late they were to embrace AI. An idea I saw when ChatGPT was picking up steam was to attach an automatic AI answer to every question, then have contributors edit/vote up/vote down the AI response - basically treat it like any other response, but have it built in. Wonder where they would be if that approach had been taken.

Instead of anything like that, they let mods use a bullshit sniff test, then changed their mind on that and got into an embarrassing war with their mods. Obviously there's a little bit of empathy to be had for the mods in terms of dealing with generated answers, but banning them outright seemed very kneejerk, a plainly bad idea, and entirely ignorant of where the winds were blowing. An awareness of how the mods were responsible for turning SO into the... interesting place it was at the time, and being proactive about fixing the culture they had created towards the end of the 2010s, would have helped prevent this.

Instead, here we are, a year removed from the beginning of this, and it really feels like Stack Overflow is a much, much smaller part of my life than it once was. Sure, in a few years the training data for GPT will be worse, but it's not like I don't still find Python 2.x answers when clicking on SO links, and I can paste in the new docs and not worry about whether some random moderator is having a good day or not.

It's sad, really. In time we'll see Stack Overflow's decline as preventable (at least to an extent), and view it as a case of what happens when the strong leadership that's needed to disagree with a loud minority is absent. Whatever you think SO should have done, I'm sure that you think what they did end up doing was not what was the best play for them.


It looks like they've had quite a few large-scale rounds of layoffs: 15% in 2020 (https://www.businessinsider.com/stack-overflow-reduces-workf...), 10% in March this year (https://stackoverflow.blog/2023/05/10/a-message-from-prashan...) and now another 28%. In total, that seems like far more than at the other companies their employees could have chosen to work at.

I imagine that people are now going to demand a premium to work there - they might only be able to attract the kind of people that want to treat it like contract work and expect a premium for insecure work to cover their expenses while they look for the next contract. I imagine this will put the company at a significant disadvantage compared to where it was before. Finding any marginally profitable use for their surplus employees (even if it is building a different product or even taking on some contracting) while they wait for the workforce to shrink through attrition would probably have been better for their company long term.


The story of SO is pretty interesting. It looks like they had only raised $153M, with the last round prior to the acquisition at $1.8 billion. I was not even aware that Prosus had purchased them; I must have missed it during Covid. Given that news, I don't find this entirely surprising. Time to squeeze out as much performance as possible from the purchase.


"You've been marked as duplicate".


"Employing people is no longer considered best practice"


I sometimes teach those new to software development. ChatGPT seems to have supplanted Stack Overflow.

It's the same behaviour of course, blindly copy/pasting errors into ChatGPT, unquestioningly copying output back, and as far as I can tell the results don't seem to be that much better, just quicker.


I used ChatGPT recently to develop an iOS app (I'm an iOS dev, but I wasn't proficient with SwiftUI at the time). ChatGPT astounded me. When I copied something that didn't work, I could tell it "hey I get this error" and it would reply with the necessary changes. I was able to get an MVP together this way. I learn by doing and I learnt much more quickly using this method than reading guides/watching YouTube/reading docs. Until that point I didn't fully get the AI craze.


I've been interested in using ChatGPT to learn SwiftUI, but ChatGPT's cutoff date scared me away. Did you face any issues?


Current cutoff date is Jan 2022. That means iOS 15 had been out for a couple of months, which is good enough for most things in my experience. If your deployment target really is already iOS 16, the gaps can be filled with the documentation and other sources.


This was about 6 months ago and I had no issues that it wasn't able to solve. It might not know about the latest features, but I don't think that matters unless you're building something cutting edge. It'll try to show you how to build what you want using the features set it's familiar with. It's probably better now with the browse plugin too.


I'd argue that ChatGPT is worse, especially for beginners, for two reasons:

1) Nobody thinks that Stack Overflow is a magic oracle. Most people do think that ChatGPT is.

2) Quality of Stack Overflow answers can be quickly judged by looking at upvotes. Lots of upvotes does not guarantee that an answer is quality, but it is a signal that is very useful especially when you aren't capable of judging the answer yourself.


> It's the same behaviour of course, blindly copy/pasting errors into ChatGPT, unquestioningly copying output back, and as far as I can tell the results don't seem to be that much better, just quicker.

This is actually a shame, as lots of very knowledgeable and helpful people have answered questions over the years on SO, and sometimes the comments and discussions are just as useful as the accepted answer.


> sometimes the comments and discussions are just as useful as the accepted answer

...but, mostly not.

Mostly, they're quite useless. ...and I say that with a considered opinion, not a knee jerk of irritation.

SO is fundamentally a site where up voting lifts relevant answers and down voting sinks irrelevant stuff.

However, comments are a side-step into non-voting, non-karma related content, that violates that fundamental premise. Comments simply give people a way to game the system; you can say unhelpful useless things, and the worst thing that can happen is the comment is flagged for moderator review. You can answer the question without it being a 'real' answer... only now it's harder to find.

The benefit of comments is entirely lost by the way they are abused in practice.

...so, realllly, if you want to chat about the solution, then ChatGPT is the way to go, not SO. SO was never the place for that.

So.. you know. It's not much of a loss really, compared to what you get with a chat bot that's happy to talk to you patiently asking dumb questions literally forever. :)


Yes, it used to be like that, but those good people left SO a long time ago. People with higher scores started to harass lower-scored people, regardless of the questions they asked.

After a couple of "you do know nothing, go play over there" replies, I stopped visiting and participating in SO, and even started to avoid going there because the answers' quality started to go down as well.

They tried to solve it with a CoC, better moderation, etc., but they only slowed down the process a little. In my eyes, it became a free Experts Exchange filled with people with short temper.

If I knew better, or if my half-day search had yielded better results, I wouldn't be asking in that poor form with all that I know. Judging me because I'm not as knowledgeable as you is not a good way to help people, it seems.


Since ChatGPT relies on Stack Overflow and similar sites for training data, it's only really going to be as good as what's already on the Web.

So it can't replace these sites in their entirety, because if those sites decline, so will the quality of ChatGPT responses. I suppose there will be some kind of mediocre equilibrium.


I'm not sure that's true. If ChatGPT reaches the point it can fully comprehend entire open source codebases and docs, of the tech that's the basis of a given question, it'll be able to guess solutions without human middlemen.


Best use case I've found so far for ChatGPT is using phind.com to search Stack Overflow and integrate the answers with whatever other relevant documentation it could find.

Sometimes it's a 1:1 copy of Stack Overflow, sometimes it's a bit more. Either way, it saves me a few minutes of searching for the relevant post.


I definitely find them better, with gpt4 at least. The main hiccups are with the latest libraries/versions of things.

It beats searching various SO questions, some of which don't have answers, some are too old, some are hard to read.

The best part is being able to ask for further clarification, or feeding the answer into a fresh-context chat to help find problems.


I don't think it's better in terms of results, but at least it doesn't close your question as duplicate/irrelevant/opinion


Just like "the war in [insert country] is responsible for our woes", I feel that "AI™ is making Stack Overflow redundant" is very incorrect and short-sighted. Even before this renewed AI frenzy, Stack Overflow had been failing to build a solid business. They tried (and apparently failed) to capitalize on job postings, which IMO is a miss on their part because, given the traffic and eyeballs, it's the perfect place for that. But either way..

The anal-retentive people we complain about are the same ones who fed the AI. When they no longer have a home from which to torment us, where will the AI get its answers? Training on code is "cute", but not enough.


Every time I read something like this I am wondering if I am just missing out or using products the wrong way.

Sorry to anyone from SO reading this, but I have been using the site for probably 10 years (or however long it has existed), mostly as a reader, and it has exactly two features for me: show up in my Google search, and have the answer ready. (It's two and not one because I also need to be able to find it.)

Why the hell does this company even have so many employees? It's a forum and I've not used any new feature. Person A asks questions, Persons B-Z answer. Someone fixes the bugs, updates the dependencies and someone else keeps the hardware and network running.

I suppose it's me. I don't need exciting new features, I don't need growth, I don't need an experience. For my needs SO could be run by 3 dude(tte)s in a basement, and as long as it's profitable enough for them, all is fine.


Instead of just iterating on a solid business, this is another case of promising infinite growth, which in reality is just a different form of pump and dump. Fire the executives.


I have been running my company for 2.5 years now, so a quick reminder: employees expect yearly salary increases, and tech employees have very high salary growth expectations. Can you explain to me how to make employees happy without "infinite growth"? While you're at it, please also help me understand how a company is supposed to secure its sustainability without growth, given expected positive inflation.


I would argue that salaries are not following the infinite growth, not even close. Sure, at a startup there is a path for increases, but I bet Google, Amazon, Meta, or any other big company are no longer increasing salaries as much as their revenue grows year over year.


> You can explain to me how to make employees happy without "infinite growth"

Autonomy, so they don't get a do-nothing manager who wants meetings to discuss upcoming meetings.

Treat them like adults - also, hire adults in the first place. Throw out well-poisoners early on. Bullies, sexists, racists, over-competitive twats, ass kissers, boot lickers, etc. If you somehow hired such types, throw them out too.

Pay must be market rate, but doesn't have to be 500k for an API plumber.

Remote work, so they can buy a property they can afford, settle down, and raise a family.

And so on.


Right. These types of conversations should always trend toward addressing systemic problems with our culture and the infinite growth mentality. It can be stemmed but until our monetary system is no longer based on interest and debt, eventually we will all face a reckoning.


Would the prices you charge not also track inflation?


Efficiency increases could reduce the person-hours necessary, allowing a company to do the same with fewer people, or the same number of people working fewer hours per week.


> Since I have been running my company 2.5 years now, a quick reminder: employees expect yearly salary increases, tech employees have very high salary growth expectations.

This is, in my opinion, a very US-centric observation/expectation.


OP can't run a business and obviously never has.


> Over the last 15 years, _we’ve_ built Stack Overflow into an industry-crucial knowledge base for millions of developers and technologists.

If you continue reading, he doesn't mention the community as the most important part of the "we" at all.

Added as a side note, so as not to seem so obviously greedy:

> To our community members and customers reading this note: you are foundational to our success.


When you put all your money into an already hated product, no money is left for your employees.

Great work, Stack Overflow executives!


What product? Their B2B stuff? Seems useless to me but is it hated?


Probably OverflowAI



You would think that companies would take the easy step of not using internal company nicknames ("Stackers") in these press releases.

Just “employees” would make it sound less tone deaf.


Agreed, and I say "people" more often than "employees".

"Employee" has come to mean transactional and disposable, and I don't know how many leaders still feel obligation towards employees.

Maybe thinking of people as people, rather than as what "employee" has come to mean, will activate feelings more like "our people" or "my people".


I didn't bump up against "stackers" but I certainly did bump up against the post pretty explicitly saying their community members were not their customers. (I assume their advertisers are).


It always amazes me how stackoverflow became expertexchange, the very thing it replaced. Is it just inevitable as something grows?


The bigger something gets, the more likely it is to hire people who turn out to do things that don't add value.


The problem with Experts Exchange was that the answers were put behind a paywall (I am aware there existed some tricks to circumvent it), and were thus hard to access. I am not aware that Stack Overflow is currently on its way to become like that.


You mean expertsexchange.


Layoff announcement. 2 paragraphs. 3 mentions of AI.


Like a bad apology video on YT where the artist tries to sell you his product while crying.


Might explain why the moderator strike never ended


I've noticed, searching the recent questions, that the answer counts and views are really low; there just don't seem to be many people on the site answering questions anymore.

There is a big message telling users not to use AI-generated content for answers.

I'm starting to wonder if the no-AI-answers rule is a good idea.


Moderation got extremely out of hand, to the point that people are demotivated from asking questions. No questions, no answers, no traffic.


I stopped viewing due to LACK of moderation.

There were too many questions asking the same thing that had been answered many times before.

Too many where the answer is RTFM.

SO used to have interesting questions, questions that I needed answered or that I found interesting to spend time on to research.


> Too many where the answer is RTFM

I think that is precisely the problem. A kid is teaching himself to program. He has an error which a million people before him have faced. He types it into Google and gets SEO bullshit. He asks Stack Overflow and they close his question immediately as a duplicate of a question which, to him, looks very different. He asks ChatGPT and he gets an answer immediately.

Where does he start the next time he has a problem?


It should be with the manual, but current ones are not that good.

So it will be with the possibly wrong answer from ChatGPT.

When learning, nothing replaces a book or tutorial that has had some proofreading or editing.

Just picking up small bits from the web here and there makes things difficult and does not really teach you.

Learning in general requires you to fail and think, not just google an answer.


> SO used to have interesting questions, questions that I needed answered or that I found interesting to spend time on to research.

IMO a lot of the interesting questions got moderated away. Almost all of my favourite SO questions are closed as off-topic or subjective. I do think that the community struggled to scale with the influx of a huge volume of users, but I also think that the strategy they chose to deal with it was ultimately overzealous.


By interesting questions I mean ones where I did not know the answer and it was not a simple read-the-manual case. They would be ones I would have found fun to research.

Your definition of interesting is a discussion, which is not what SO is for - and even if it were, they do not have the tools to support it, e.g. threading, being able to vote on interim comments, etc.


No wonder LLMs ate their lunch. If questions about contents of the manual are the only ones allowed, you are literally better off just asking Bing Chat.


Everything should be in the manual - but it is not nowadays. Many projects just say "read the code", which makes it difficult to first find the correct code and then understand it, as the code might be more complex than the user's knowledge or be in a different language.

Even when everything is documented, it is often difficult to put the individual facts together, and that is what a good SO answer will do.


If you have AI answers then you get incorrect answers - why would this be good?


How would this be different from not allowing AI?


I think because the effort bar becomes too low when providing answers.

The gamification of the Stack Exchange sites is a double-edged sword, and I think it relies on an assumption about the effort threshold required to provide an answer.

Typing out an answer, even a wrong one, takes time and effort that the system then rewards through points and badges.

If you can just copy a question to Copilot and then paste the answer back, the effort threshold is likely then low enough that too many (or at least lots) of people will do only that in order to engage with the gamification and get points, rather than do it to try to help out the asker.


Volume.

Most SO answers tend to be close to correct, as they explain the reasoning behind them.

A good answer should also give references showing why it is correct.


Maybe it's not a good idea for the company. However, avoiding the echo chamber feedback loop in AI training data is absolutely a good idea for society.


They do advertise they want to do a lot of stuff with AI, while they explicitly do not allow AI in their answers. Sounds pretty hypocritical to me.


No it is not.

The issue is that current AI answers invent 'facts' or produce wrong code. AI might help improve how people ask questions and possibly help them find an answer, e.g. by improving search, but to answer you must be correct and give correct references - not false ones, as have turned up in legal cases and elsewhere.

How to use AI is an open question.


The views being lower is partially due to Stack Overflow implementing a legal cookie confirmation dialog that makes it as easy to reject cookies as to accept them.

You need cookies to count the views.


You don't need anything on a user's machine to look at your server logs and see how many requests you've had.
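For what it's worth, counting views from server logs is straightforward. A rough sketch in Python (assuming a common combined-format access log; the sample lines are made up):

```python
import re
from collections import Counter

# Pull the request path out of combined-format access log lines and
# count views per path -- no cookies or client-side JS involved.
LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def count_views(log_lines):
    """Count page views per requested path from raw access-log lines."""
    views = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            views[m.group(1)] += 1
    return views

# Example with made-up log lines:
sample = [
    '1.2.3.4 - - [16/Oct/2023:10:00:00 +0000] "GET /questions/123 HTTP/1.1" 200 5123',
    '5.6.7.8 - - [16/Oct/2023:10:00:01 +0000] "GET /questions/123 HTTP/1.1" 200 5123',
    '9.9.9.9 - - [16/Oct/2023:10:00:02 +0000] "GET /questions/456 HTTP/1.1" 200 3001',
]
print(count_views(sample))  # Counter({'/questions/123': 2, '/questions/456': 1})
```

A consent dialog can't affect this kind of counting, although bots and caching do make raw log numbers noisier than JS-based analytics.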


Better tell Stack Exchange that. See the answer at https://meta.stackexchange.com/a/391625/136010, and a Stack Exchange employee saying the page view count is down due to the cookie changes.


> To our community members and customers reading this note: you are foundational to our success.

I love SO, but I really wish it would have become some community funded / non profit entity. Who wants to help these private equity firms?


Not much empathy in that announcement.


But there's AI!


The post mentions the "successful launch of OverflowAI" which is something I had forgotten existed. For context the HN post for it is below, in which most people here are cynically dismissive of it or just confused about what it is supposed to offer.

Is it cynical to wonder what "successful launch" means in this context, or just HN being overly dismissive or out of touch?

https://news.ycombinator.com/item?id=36892311


In this market, can they also expect some people to resign soon, as a direct result of the layoffs?


Am I reading this properly?

> As we finish this fiscal year and move into the next, we are focused on investing in our product. As such, we are significantly reducing the size of our go-to-market organization while we do so. Supporting teams and other teams across the organization are impacted as well.

This means they're cutting down on new product introductions, and are focusing on generating profit from their existing/deployed products?


Marketing in 2023 is truly hilarious. I can't decide whether it's better to get ahead of the news media by announcing your own failure, or to just be as good as you can be to the employees that you have to let go, and not write blog post about it. I know it's the current trend to make a big fuss with a tweet and everything about how much we're going to miss you and all. Somehow I don't think it's really helpful for the newly unemployed.

If someone can offer some insight on why we're paying somebody to spend a week thinking about what to call the layoffs ("headcount reduction" lmao), and not continue to pay the actual contributing members of a software team, and especially what the angle is in writing a blog like this instead of quietly taking care of your people, I'm all ears. Is it marketing trying to justify their continued existence? Are we helping the people we're letting go by announcing the flood of (probably experienced) talent into the marketplace? I'm only disparaging this practice a little bit. I genuinely am curious about what the psychologists and market analysts have found about how this is beneficial to the company.


In other news, Copilot is uneconomical [1], so one wonders where all this will land.

[1] https://www.wsj.com/tech/ai/ais-costly-buildup-could-make-ea...


Is this a canary in the coal mine?

I've been following layoffs for the last few years, and it generally seems like ~10% is normal, with larger companies doing a smaller percentage, while startups often lay off 10% or slightly more.

28% is a scary number.

I'm betting that everyone on this site has visited StackOverflow at least once. If they cannot figure out a way to weather an incoming storm, then this makes me even less hopeful about tech employment in general.

But, then again, SO did get purchased in 2021 by Prosus (https://en.wikipedia.org/wiki/Prosus).

Scary times ahead? Or, just more evidence that private equity ruins everything good? Or, an entirely new story that AI is going to rapidly destroy all businesses?


Context matters.

We are coming out of a ZIRP (zero interest rate policy) period where debt was effectively free and lots of companies borrowed and grew very quickly (some/many/most probably knowing they might eventually need to shrink back down some amount if the financial climate changed).

I have no idea if this was the case with Stack Overflow, do we know how many people they hired in the last 3 years, leading up to this cut?


AI is a red herring; you would be hiring people to work on that, not just firing people working on something else.

A common trend in these lay-offs is that I can't for the life of me figure out what those employees were doing. That, combined with a sudden lack of easy money due to rising interest rates and more uncertainty about pretty much everything, seems like a simpler explanation for these lay-offs.


StackOverflow have already lost completely to ChatGPT. Hiring people to work on AI would just be throwing away that money.


I wouldn't try to read the tea leaves from a single company


> Or, just more evidence that private equity ruins everything good?

I think it is this. Private equity translates to layoffs regardless of the need for layoffs.


For some OSS projects, Q&A has moved to their issue trackers. Some recommend asking on SO, but the number of projects that are OK with questions in the issues seems to be increasing.

ChatGPT 3.5 started giving me wrong or not-so-good answers, so I rarely use it these days.


How much did the Stack Overflow corpus contribute to the training of GPT-4?

(I'm thinking contributed directly, not indirectly, from all the copy&pasting answers into Git repos.)



Stack Overflow not having a public white-labeled version was their demise; not having something for a public community that is sellable without having to go through the Area 51 process was their demise. Discourse has been eating their lunch. https://www.discourse.org/customers


Interesting given that Jeff Atwood co-founded SO and Discourse.


They still haven't admitted that LLMs have taken their users and that usage of the site is dropping?


They could deny AI bots the ability to crawl their content and build their own AI instead.

The way things are going, there’ll be islands of content and AI trained on it.

Like how Reddit and Twitter closed crawling of data.


"The CEO of Stack Overflow reassures the millions of developers who rely on it to do their jobs that its $1.8 billion sale to an investment firm will only help it grow and 'take things up a notch.'"

https://www.businessinsider.com/stack-overflow-acquisition-p...

Whew! I was really worried that selling out to private equity was going to hurt the company in the relatively-short term.

Seriously, American capitalism is increasingly persuading me to be a socialist.


If SO had just invested in their chat feature, it could be the Discord of today.


In seeking profit, what comes next? Blue checkmarks, paywall, "free speech"?



