> But I do tend to peruse extremist circles on both sides to understand the radicalism a little better, and generally think that keeping these folks relegated to unseen areas is net-negative.
Why net-negative? If it is accessible online (and it must be, otherwise how would they communicate?) then one can still peruse it and keep a pulse on it.
Yeah, this is basically what I meant. Even Discords or TG groups that are "public" but still require your identity to be attached, and are therefore "doxxable" in some respect, are still off the radar in the sense that they're not included in research datasets.
I sometimes wonder whether violently oriented online extremism is indeed on the rise, or even completely mainstream at this point. Or if it's actually a very small, isolated problem that gets amplified and magnified through the clickbait media cycle. Or anywhere in between (e.g. the common claim that these ideas are laundered into the mainstream, potentially with some amount of watering down, dogwhistling, or code switching that obfuscates the source).
At this point, I really think very few people, if anyone, even know the order of magnitude of the problem. Certainly, there have been some academic studies on the topic, but most of them focus on fully public content on e.g. Twitter or TikTok, as opposed to the "dark circles" like KF, TG groups, and Discords.
There are also technically public boards that are somehow blocklisted on more mainstream social media that exist in a sort of grey area. I probably can't post any of them here without the risk of getting this comment moderated, but many of them were formed in the wake of exoduses from banned subreddits, and then popularized by advertising on those subreddits in the small window between getting quarantined or admin-moderated and getting banned.
Idk, this comment wasn't very cohesive, even after some edits, but yes, there's a big difference between a public subreddit and a semi-public Discord server in terms of monitoring certain kinds of speech. And I think most people here at least somewhat buy into the legitimacy of the Streisand Effect, and I think a lot of this is just that but with nastier people.
Extremist types do face a tradeoff between security and visibility, because they need to grow the size and/or quality of their network or watch it shrink due to boredom or demotivation. Conversely, people who monitor extremism want to limit its growth, but not so aggressively that extremists significantly up their game and monitors have to start researching infrastructure from scratch.
That's exactly the problem with these sites: they are horrible, and yet they can slowly recruit people because companies still provide them with services that allow them to stay public.
Such groups can splinter endlessly into unidentifiable new subgroups. How do you stop those? What is your imagined end-game here — making freedom of association default-deny?