It's not wrong to be skeptical. I do think there's a sort of boiling frog effect, though. Internal company culture never matches public perception, and you're always able to tell yourself a story that the public misunderstands what drives the company. After all, you hang out with your colleagues and you know they aren't bad people. But then some big controversy snaps you out of it and you realize that the company's internal collective fiction is no more accurate than the public perception, and perhaps you are the baddies after all.
hello, i'm the ex-employee in question and can speak authoritatively on the subject of myself :)
i joined FB for the first time as an intern almost a decade ago. i suspect i have a substantively different view of the company and the people who built it—even compared to many other employees, and certainly compared to folks who have not been in the ~room where it happens~ (heh).
even accounting for some amount of insider bias, i think there's still a material discrepancy between how the public viewed certain major FB "scandals"—via the lens of media spin and profitable reporting—and how many folks like myself, who were privy to additional context and private information, viewed them. to some people, the recent discord around hate moderation might seem like just more of the same FB badstuff. not so for me!
even folks at FB who don't work directly on certain products and policies are often immersed in discussion about them (unless they aggressively unsubscribe). these discussions, again, have their bias, but i hope we can concede that it's still a lot of passive brain cycles being spent swimming among these topics. folks develop deeper intuitions for how troubling X or Y publicized issue actually is, relative to all the things (of all flavors) that happen at FB and among its users. there's also a ton more discussion of legitimately positive societal work and how to extend those successes, whereas those successes rarely make good media narratives.
the good doesn't cancel out the bad—that's never how it works—but you do have to consider both together at every decision point.
in the leaked audio of my internal video post, i say that my long road to the door started in 2019. i still stand by that inflection point, and feel that before then, FB—while clearly on the back foot for some time due to rampant abuses of its product by powerful groups—was on balance moving in a direction i supported.
you may have made a different choice—or you may feel you would make a different choice, though you may not have the full slate of inputs right now to be sure you would, in situ. but, at least for me, i have reasons why the questions around the 2016 election or fake news or data breaches were things which, to me, seemed categorically different from the questions concerning hate moderation.
feel free to DM any earnest questions about why these things don't all congeal into one mass of problem for me (and i suspect other current and former FBers).
I'm still of the opinion that you chose personal gain over doing what you knew to be the right thing. It's something everyone (me included) does many times over. But the scale of wrongdoing by Facebook is exceptional.
Equally as possible is that our intuitions about "the right thing" are wildly different. I think that democracy, societal cohesion, and personal privacy are important and that Facebook has permanently damaged all three.
i think you may be setting up a few false dichotomies here.
> the scale of wrongdoing by Facebook is exceptional
absolutely—but i think you (and certainly, many many people in the world) may be missing that this is in large part due to facebook's scale, period, not any specific wrongdoing-at-scale. facebook is enormous and has enormous impact. enormous quantities of good and bad things happen on facebook and through facebook every day. it's not enough to /just/ point to its "scale of wrongdoing" to say that it's "wrong" to associate with it. every government in the world causes harm at scale. should we embrace true anarchy? are we all culpable for participating in society? i mean, maybe yes to that last question; but it's not a very interesting answer, possibly because it's not a very interesting question.
i think you have to consider things holistically. facebook does harm—that's bad! does it do good at scale, also? how much of the harm is facebook's "responsibility", at least from the perspective of assigning moral culpability? (though of course it's wise to treat every foul as your own responsibility, since you control only your own actions.)
consider the 2016 election mess on social media. do we pin the blame squarely on social media here? if so, why? i think social media was caught off-guard. social media started its life as small circles and communities, grew into a media and meme distribution platform, and then somehow became hijacked by powerful entities fueled by state money (including states themselves) as a dezinformatsiya and propaganda side hustle. was it FB's responsibility to predict and prevent that, even when so few others did or could? if so—are those other people and platforms not morally culpable for not making enough noise or action? are the state actors themselves not morally culpable for their atrocious agency?
in late 2016 and early 2017, i read lots of opinions and articles in the NYT about how FB didn't defend against disinformation and propped up negative political content. why did the NYT write so few opinions and articles about how the NYT published story after story of tabloidal but-her-emails drama, or how the NYT gave the trump sitcom team so much free press?
facebook is very big and powerful but it is not beyond exploitation. should we hold it morally culpable for being exploited in these ways? facebook is not always just the messenger, but a lot of the time it is, and perhaps we should not be so quick to proverbially shoot it.
> I think that democracy, societal cohesion, and personal privacy are important and that Facebook has permanently damaged all three
dovetailing off the above: are the instruments of the state not causing this same damage? are traditional media conglomerates not causing this same damage? are political entities accepting massive cash injections not causing this same damage? are certain social institutions like megachurches not causing this same damage? are social norms sculpted by late capitalism not causing this same damage?
do you curse your server for bringing you bad food?
this question is rhetorical. the answer is obviously yes, people do that all the time, but like, maybe they shouldn't.
so for me the question is: how much of this damage do i feel that facebook is "immediately" responsible for, versus "secondarily" responsible for? i brought up the 2016 election above. my view is that FB was in large part taken by surprise and exploited in the course of those events.
obviously, i have insider bias here. but much as both-sides-ing every single argument is not actually a "neutral" position, neither is the position of being on the "outside" and not having insider bias. neutrality is a fiction; there's nothing truthier about being on the inside vs. on the outside.
but my position gave me (and other FBers) access to information about motion, decisions, and human actions which informed my evaluation of culpability. many of the "scandals" in FB-the-darling-child-of-the-media's history had a similar arc: facebook had some posture, it largely worked alright and allowed for good or neutral stuff to happen, conditions changed, "small" (but still at-scale) but high-profile harm was incurred due to some bad-faith actors, and facebook responded. not interesting to me, someone who watched this process happen literally every day.
employees didn't have trust in FB because they were given literal kool-aid to drink; it's because they had extra information and sometimes it led to obvious conclusions that are utterly non-obvious without that information.
but i think facebook's response to hate moderation has a different texture. it's had years to adapt to the new reality of constant assault from political forces (though i would never, ever expect or want perfection here). but rather than pushing back—as it did in pretty much all past privacy or data breaches—it seems to be adjusting to explicitly allow for and tolerate some of this behavior which i consider "bad".
i stayed at FB past the moment at which i developed this concrete concern for entirely selfish reasons. but don't conflate that inflection point with the whole history of FB's narrative, as told by the media. you may feel the whole story has a uniform texture, but i don't, and i suspect other FBers do not, and i suspect there are good reasons to feel that way.