Facebook Offers Tools for Those Who Fear a Friend May Be Suicidal (nytimes.com)
59 points by danso on June 14, 2016 | hide | past | favorite | 86 comments


This has a ton of implications. First, people 'tagging' profiles as depressive are probably training an algorithm behind the scenes, so if you behave a certain way, the chance you are depressed is 94%. Second, who owns this valuable health information? Can they sell it to third parties? Insurance or recruiting companies?

I'm not against the idea, but it's thought-provoking to consider the several implications of what this data means.


Just watched a documentary about Foxconn, which tries to filter out suicidal employees. Sure, Facebook's profiling would be useful for many companies that put employees under high pressure, and for their HR depts. Not sure humanity is progressing, though.


> which tries to filter out suicidal employees

The cynic in me says they don't particularly care about the high suicide rate of assembly line factory workers in China, as long as said factory workers don't off themselves while on company property or at the company dormitory. Because that's bad PR, guys.


> Not sure humanity is progressing, though.

What would constitute humanity progressing? Would evolution count? Would it count if it appeared in the shape of weeding out suicidal people?


Facebook wants to tell insurers and corporations which employees and customers might be a suicide risk.


The Facebook tack is interesting, because it's technically not a service for the suicidal individual; rather, it's a service for their friends who "really want to help, but... just don’t know what to say, what to do or how to help their friends." That's a unique angle, which sets it apart from, say, Apple, which has taken significant steps since the first rollout of Siri to enable her to respond appropriately to certain queries (e.g., "I was raped" or "I want to commit suicide").[1,2] The conversation on how we want our electronics and services to behave is really interesting, and one that is only going to get more complex as we get better at language and mood processing/modeling/profiling.

1: http://abcnews.go.com/Technology/apples-siri-now-prevent-sui...

2: http://www.telegraph.co.uk/technology/2016/04/04/siri-update...


Perhaps it is time to start a conversation on why suicide is seen as a problem, tout court.

I always thought of suicide as a possibility among others. It’s a normal feeling to be tired of life.

I would have committed suicide a long time ago if it weren't so complicated and scary, with a huge risk of failure. I don’t want to feel pain, I just want to go. Like in euthanasia.

I checked, and unfortunately not even Switzerland offers the possibility of euthanasia (good death) to healthy people. I really think that’s a societal shortcoming that needs to be addressed in the future. What's the problem with suicide at all?


"What's the problem with suicide at all?"

Aside from the horrible impact it has on friends and family (including the risk of it becoming contagious - http://www.npr.org/sections/goatsandsoda/2016/04/21/47484792... ), many people who attempt suicide and fail later decide that they are glad that they didn't go through with it.


And many try it again.


And those might be the same people, who were unlucky enough to have one tiny bad enough moment.


What about all the unlucky moments of misery and/or wanting to not exist but hovering just below the threshold of suicidal? Days? Months? Years? Decades?

I wish society had the balls to tell the depressed they're obligated to stay here and run out the clock, but beyond that we don't really care how miserable they are. That we'll talk about talking about depression, but not actually talk about depression [0]. That's what a shrink is for, go get one so we don't have to deal with you being mopey (I guess it'll be positive for you, too).

[0] http://hotelconcierge.tumblr.com/post/116790700524/we-need-t...


As someone who used to be suicidal, I'm pretty glad I didn't do it.

Sure life is still pointless and kind of meaningless, but I'm having a lot more fun than I would have if I was dead. Unexistence, while easier, doesn't sound nearly as fun.

And besides, how will I ever achieve all those hopes and dreams I had as a 12 year old if I'm dead? I won't. But the longer I stay alive, the more likely it is. "Survive until tomorrow" is, in a sense, the only thing that matters. Or at least the thing that comes the closest to mattering.


Unexistence, while easier, doesn't sound nearly as fun.

We do not actually know that Unexistence would be easier, nor whether or not it would be fun.

I don't intend any disrespect at all. I have attempted suicide and spent plenty of time suicidal. So this is a conclusion I drew in earnestness.

We have no means that I know of to investigate what comes after death, whether or not something survives the physical body, etc. Many people believe in religion and an afterlife or in reincarnation. Others believe nothing survives after the body dies. But, scientifically speaking, we do not actually know.

I am glad you are no longer suicidal.

Peace.


I agree, we don't know what if anything comes after. But we do know what comes right now.

So I figure, why risk it? It could be better, it could be worse, it could be nothing. I have no evidence either way so there's nothing to base the experiment on. And I know that eventually I will find out no matter what I do.

So there really isn't anything to gain from suicide, imho. Better to stay alive as long as possible and do shit on this side.

I also came to this conclusion after much thought.


Yeah I've somehow incorporated depression into my life. Suicide is now always a choice, I have a few plans if things go really, really bad.

Things are also somewhat more interesting, since if I'm anxious about something, I remember that hey, I am dead, whether it's 50 years from now or tomorrow (I think Bushido had something similar?).

Nothing really matters so stop worrying and try to enjoy the grind...


From a secular, philosophical point-of-view, part of it is the notion of informed consent. We, as a society, have decided that certain serious decisions should only be made by people we judge as possessing all their mental faculties and be given a neutral, pressure-free environment to formulate their point-of-view.

Of course, this is subjective, and vulnerable to the no-true-Scotsman fallacy: "if you have decided to commit suicide, you must not have possessed all your mental faculties, or felt otherwise pressured towards that outcome." Hence people find it hard to accept that the choice was made in earnest. (edit: quotes)


I don't think suicide is inherently a much worse cause of death than many others, but it is often associated with a mental illness, outside of which the sufferer would not choose to die. Although the illness manifests as free will, it is an illness nonetheless that might be treated. I don't mean to say that it is impossible for a person of sound mind to choose not to live any longer. However, given how often treatment results in improved will to live and improved quality of life, I think it would be irresponsible to assume a healthy mind without challenge and go to euthanasia as a first-resort standard of care. We certainly wouldn't do that for any other presentation of symptoms.


I'm not saying that suicide isn't associated with mental health, but when I read this I got a vague feeling of a tautology.

If we define the components of mental illness as being suicidal, obviously that would lead you to classify all suicidals as mentally ill.

For a long time being homosexual was seen as a mental illness and those (probably most) administering treatments saw themselves as helping them and the patient would have a higher quality of life after they were cured. The doctors thought the patient was harming themselves and their family by participating in lewd activities.

Not trying to say one thing or the other about suicide. I just think it is interesting what we consider a mental illness and why, and why it's dangerous to jump to the "mental illness" (I hate this phrasing) reasoning right away.


Some people who are suicidal are not mentally ill. Sometimes they lose perspective of their life, and this may continue for many months/years. They feel that the pain they are in now, is much worse than death. They run out of answers for "why they must live".


Yeah, ritual suicide for example was not carried out by depressed people.

It was a cultural phenomenon, not mental illness.


Good question. Scott, a psychiatrist, explains his reasons here: http://slatestarcodex.com/2013/04/25/in-defense-of-psych-tre....


Many, probably most, who consider suicide are not well. They may be in good physical health but poor mental health. When we have a physical ailment, we (the patient) can generally understand the path to recovery; hence we don't kill ourselves when we break an arm. With mental health, not only is it difficult to tell when you are in good mental condition, but it is almost impossible to see the path to recovery when you are sick. In other words, wanting to end your life may be a symptom of a short-term illness (say, someone with depression who will be better in a few months). Differentiating between people who are of completely sound mind and want to end their life and people who are suffering and need treatment is never going to be foolproof, so it couldn't be offered in the same way as euthanasia (similar to how most countries don't practice the death penalty: it's nearly impossible to be 100% sure of something, and you need to be when it comes to doing something as permanent as ending a life).


The biggest problem is the cost to other people, especially if the person committing suicide has children. You have a point though.


It's seen as a problem because most people who attempt suicide regret it or change their mind later.


Because a lot of people who attempt suicide and fail don't try again and lead "normal" lives; see, for example, the study on Golden Gate jumpers who survived ("the moment I jumped I regretted it").


Not advocating for suicide, but playing devil's advocate...

I would imagine that bank robbers instantly regret it the moment they get caught. But that doesn't stop the ones who got away from completing their goal and not regretting it.

People who fail suicide often have HUGE amounts of personal anguish because of the police, hospital, family, etc. all judging them from the suicide attempt.

I think it's easy to confuse regret of the action overall with regret that you tried the action and failed. But I imagine there are some who genuinely regret their attempts.


At the very least, suicide causes a lot of grief and pain to others, generates work in dealing with the deceased, their possessions, disruptions caused, etc., and wastes a valuable resource that took years of time and effort to develop.


Couldn't you say the same about any death? Regardless of reason?


Yes, and the general consensus towards death is against it. The parent's point is that, just like any other kind of death, suicidal death is a problem.


Many questions are never asked in these articles.

Is suicide bad? Is it immoral? Is it a crime? Should it not be permitted? If it is wrong, immoral, and a crime, should people who attempt it be punished?


Well, from what I recall from biomedical ethics, the majority of people who attempt suicide and fail do not attempt suicide again. This would suggest that for most people suicide is merely a phase they can pass through.

Regardless, these questions are asked all of the time. A news article about a new feature on a social media site isn't an appropriate forum for discussing such a ridiculously complex topic as the ethics of suicide. The only way your comment could be less appropriate is if you were advocating for a discussion on the ethics of murder on an article about a murder suicide.

Edit: I suggest checking out the article on suicide on the Stanford Encyclopedia of Philosophy. The Encyclopedia as a whole is a great resource, and this article is no different:

http://plato.stanford.edu/entries/suicide/


> This would suggest that for most people suicide is merely a phase they can pass through.

Strictly speaking, I don't think that necessarily means that these people are better off surviving. One could imagine that they are in pain that is temporary, but severe enough that they are better off dying immediately, even taking into account that they are missing out on the life that would otherwise have come after the pain.

By making them stay alive, we are perhaps forcing them into the worse option.


Oh, I don't disagree. I personally believe that suicide can be a rational decision. However, I believe that far more often it is not.


I'm in the camp that suicide is a human right, and I get plenty of heat for having that opinion. People seem to feel entitled to the existence of other people, which I can't seem to understand.


I'm sure the majority of the heat you receive is because your stance has absolutely no nuance to it. For instance, the majority of people who attempt suicide and live never attempt suicide again.

Similarly, suicide mostly seems to be impulsive -- if access to a certain method of attempting suicide is reduced, the corresponding decrease in suicide attempts via that method is not reflected in a corresponding rise in suicide attempts via other methods.

Lastly, suicide rarely occurs in the depths of depression. People who attempt to take their own life are much more likely to be heading into or out of a depressive episode.

While these aren't the only reasons that we should attempt to prevent suicides, I feel they provide sufficient counter-evidence to your argument. I will say that I am in favor of physician-assisted suicide, and there is definitely a contingent of people who are hell-bent on killing themselves, but for the most part most people who are suicidal will later be thankful to be alive.

If you'd like to know more, I suggest reading the article on suicide on the Stanford Encyclopedia of Philosophy:

http://plato.stanford.edu/entries/suicide/

Edit: Section 3.4 (Libertarian Views and the Right to Suicide) responds to your criticism very directly[1]

If you find that interesting (or if you still find yourself unable to sympathize with those who believe they should attempt to prevent suicides), I'd suggest taking a course on either biomedical ethics or ethics in general. These topics have been debated for centuries at this point, and there are a number of very compelling arguments on both sides.

[1] This position is open to at least two objections. First, it does not seem to follow from having a right to life that a person has a right to death, i.e., a right to take her own life. Because others are morally prohibited from killing me, it does not follow that anyone else, including myself, is permitted to kill me. This conclusion is made stronger if the right to life is inalienable, since in order for me to kill myself, I must first renounce my inalienable right to life, which I cannot do (Feinberg 1978). It is at least possible that no one has the right to determine the circumstances of a person's death! Furthermore, as with the property-based argument, the right to self-determination is presumably circumscribed by the possibility of harm to others.


Nothing in that linked article is new to me. The religious and deontological arguments are uncompelling.

No one chose to exist, but I think everyone is entitled to not exist, regardless of their reasons. Even if someone is thankful that they were prevented from taking their own life, that's essentially an argument of the form "the ends justifies the means."

As Camus said, "There is but one truly serious philosophical problem and that is suicide." I think it's up to everyone to decide that for themselves. Preventing people from doing such is unethical, imho.


>Even if someone is thankful that they were prevented from taking their own life, that's essentially an argument of the form "the ends justifies the means."

You have completely misrepresented my argument. Nowhere did I present anything that could even remotely be construed as a consequentialist argument.

I provided that point as a direct rebuttal to your point that if we look at agent-focused moral systems, and ignore agent-neutral moral systems, preventing people from committing suicide doesn't make sense.

> I think it's up to everyone to decide that for themselves. Preventing people from doing such is unethical, imho.

Your opinion is indefensible and insane. What you have implied constitutes prevention is absurdly broad. For instance, from a consequentialist standpoint, calling a friend you fear may be suicidal and asking them to hang out would constitute prevention at least some of the time. Ignoring that issue, the evidence I've presented effectively argues against the idea that most people who attempt suicide wish to kill themselves. Most suicidal individuals are impulsively responding to what the French might call L'appel du vide, or the call of the void. If they were rational, they would not attempt suicide in the first place.

I cannot imagine you actually understand the points Camus is trying to make if this is the conclusion you have come to. Please seek out further education on the subject. Your knowledge of ethics and philosophy is appalling, and ultimately harmful.


"Prevention" implies force, and that's all I meant by it. I take no issue with talking to those who have expressed the desire to commit suicide. My primary issue is when it escalates to calls to police/medical professionals and force of law is brought to bear.

I'm sorry you find my ethics so abhorrent, though I often feel that way about the ethics of others.


> that's essentially an argument of the form "the ends justifies the means."

What else would justify the means? There's nothing wrong with that form of argument.

There are arguments of that form that are wrong, but not because of the form, rather, because either the ends don't justify the means, or because the means produce additional unaccounted costs which offset the value they have in producing the ends. Or, because there are less-costly means to the ends, or because the means don't actually produce the ends used to justify them.


The means in such arguments are implied to be unethical or somehow violate the rights of someone. When that's the case, the proposed end result is used to justify the otherwise unethical action.

I've never seen a valid argument of that form, and I use this particular phrase as a heuristic for evaluating arguments. If you have examples of arguments that take that form, I'd be happy to entertain them.


That's just an a priori assertion that certain means are categorically unjustifiable (by anything, ends or otherwise) because they are inherently unethical, which is irrelevant to the form of the argument.

If you want to make the point that certain means are categorically unacceptable, make that argument (which will very quickly, I suspect, get to a root moral axiom on which no debate is possible, simply agreement or disagreement) rather than raising irrelevancies about the form of the argument.


Fair reply, though I think I've stated my axiom: people have a right to nonexistence. As a corollary: people don't have a right to the existence of others.


Replying only to show agreement and because the other guy .. well.. you know. I wish there were upvotes.


Would you attempt to prevent someone about to engage in ritual suicide from doing so?

That's not a mental issue, it's cultural.

Not all suicides are the result of mental anguish or impulsive decisions.


You can believe that suicide is a human right, and at the same time believe that it is almost always a horrible choice. You can help people see why it's a horrible choice while still maintaining that it's a human right.

Whether or not it's a right can tend to be something of an abstract discussion. But when someone commits suicide, it's not abstract. They die.

And it's not abstract to me, because I've helped prevent the suicides of two friends, and failed to prevent that of a neighbor.


Should we prevent people from making horrible choices? We certainly don't use the force of law to prevent someone from marrying the wrong person even though they may be making a horrible choice.

I think there are conditions in which someone can live that are worse than death, and the problem is where you draw that line for yourself. For instance, I would prefer death to slavery or permanent incarceration.


Well, if my friend (or child) wants to marry the wrong person, I'll do my best to change their mind. Between "use the force of law" and "do nothing", there is a lot of room...


I'm ok with anything short of using force-of-law. My concern with the facebook feature is it provides a pretext for applying force of law.


No one is suggesting using the force of law to prevent people from committing suicide. This is just a helpful tool. It's not forcing anyone to do (or not do) anything.


Suicide is currently against the law and medical professionals regularly use force of law to prevent someone whom they think is suicidal from going through with it.

This feature provides the pretext for such force.


By the time you are in the care of medical professionals, they are responsible for your life. A physically able and rational person will find little impeding their attempt to kill themselves. A person who is mentally unsound and in medical care is not in a state of mind to make a rational decision about ending their life. It's not like it can be undone. For the physically impaired (injured, sick) but mentally sound, you have a reasonable case for medical professionals assisting, or at least not preventing, suicide. But you seem to ignore all cases other than "if someone wants to do it, let them," even if the decision is spur of the moment because they just experienced something horrible and haven't had time to process it fully. Brilliant.


> But you seem to ignore all cases other than "if someone wants to do it, let them," even if the decision is spur of the moment because they just experienced something horrible and haven't had time to process it fully

I generally believe that government shouldn't protect people from themselves. Again, I'm only arguing against using force of law to prevent someone from taking their own life, not talking someone down from the ledge.


Suicide is definitely bad by all standards set up by the living: it automatically voids all insurance contracts, including life insurance, and it's heavily frowned upon in most religions and in most families.

I find it gruesome. Revoking the right for someone to end their life is torture for some. It's not without reason that's what happens with prison: They keep you forcefully alive. It's also what happens when you're medically assisted and you can't go for euthanasia. It also happens when you're deep in the failure of life: The living will check on you so you don't commit suicide. There are anti-suicide barriers everywhere. There's pretty much no place to commit suicide anymore. And now facebook.

Our civilization relies on removing people's right to resign from life. We want them to produce.


> Suicide is definitely bad by all standards set up by the living.

Not true. Many societies have had the concept of ritual suicide as a matter of honour.


Albert Camus added you as a friend.


Heisenberg simultaneously did and didn't add you as a friend.


Questions like this are appropriate when discussing voluntary euthanasia for terminal medical conditions (which I am in favor of, for the record). They are not at all relevant in cases like this, where Facebook is trying to aid detection and intervention in cases of mental illness, when the suicidal person is (literally) not thinking properly.

Nowhere in this discussion is there an implication that suicide is wrong, immoral, or a crime, or that the people should be punished; I don't understand why you're even bringing that up.


I'm not insinuating that this is what's happening, but, forcing people to stay alive so they can keep buying stuff may be the ultimate capitalist dystopia.


We often hear that social media can make people more depressed. It's worth seeing if that effect can not only be cancelled out but reversed.

With this in mind, it would be nice if they detected those who might be depressed and tailored the news feed to try to make people happier.


Ah yes. Keep them in a perfectly balanced, harmonious state. /s


Ads for Zoloft?


I find this incredibly disturbing.

Facebook's entire business model is built around manipulation and control. They do psychological experiments on their users and skew their news feeds to influence their political opinions.

And now Facebook wants me to tag my emotionally vulnerable friends, possibly for further abuse, surveillance and manipulation? For profit? For the good of the state? How much will insurance companies and my future employers need to pay Facebook for the improved psychological profile they're building?

No. If I see anything on Facebook that makes it seem as if someone I know is depressed or needs help, the last thing I'm going to do is make Facebook aware of it.


Mildly interesting observation: the suicide prevention team is 100% female.


The photo shows four women, but the article states that "Facebook has a team of more than a dozen engineers and researchers dedicated to the project."


I think this sort of thing is dangerous because it's (probably) not data backed and you're really playing with dynamite.

Do we know if this sort of message causes a person to be more or less likely to take their lives? How can you A-B test that?

Does anyone else think your phone telling you it thinks 'you aren't being yourself anymore' could set someone over the edge?


> Do we know if this sort of message causes a person to be more or less likely to take their lives? How can you A-B test that?

Actually... you probably could train a model to detect suicidal posts. I had a friend who committed suicide a few months ago, and I didn't see it coming. But looking back, there were signs in his facebook posts (in line with this article's example of thanking everyone), but our conversations were normal. It is at the very least a wishful line of thinking for me that ML can recognize this type of pattern.
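Purely as an illustration of the kind of pattern recognition meant here, a minimal sketch: a toy bag-of-words Naive Bayes classifier over posts. Everything below is invented for illustration (the example posts, the labels, the features); nothing here is Facebook's actual system, and a real model would need far more data and clinical validation.

```python
# Toy bag-of-words Naive Bayes classifier. Illustrative only: the training
# posts and labels are invented placeholders, not real or clinical data.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(posts, labels):
    """Count word frequencies per class (0 or 1) and class sizes."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter(labels)
    for post, label in zip(posts, labels):
        word_counts[label].update(tokenize(post))
    return word_counts, class_counts

def predict(word_counts, class_counts, post):
    """Return the more likely class under add-one (Laplace) smoothing."""
    vocab = set(word_counts[0]) | set(word_counts[1])
    total = sum(class_counts.values())
    scores = {}
    for c in (0, 1):
        score = math.log(class_counts[c] / total)  # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in tokenize(post):
            score += math.log((word_counts[c][w] + 1) / denom)
        scores[c] = score
    return max(scores, key=scores.get)

# Hypothetical training data (0 = routine post, 1 = concerning post).
posts = [
    "had a great day at the beach",
    "excited about the new job",
    "thank you all for everything goodbye",
    "i cant do this anymore goodbye everyone",
]
labels = [0, 0, 1, 1]
wc, cc = train(posts, labels)
print(predict(wc, cc, "thank you everyone goodbye"))  # → 1
```

Even this toy version picks up the "thanking everyone / goodbye" pattern the article mentions; the hard part in practice is the base-rate problem (false positives swamp true positives when the event is rare) and getting labeled data ethically at all.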


To reply to your middle paragraph, you'd probably want a longitudinal cohort study rather than an A/B test. A well-designed one can provide pretty conclusive indications in situations where A/B testing (or, testing with a double blind control as the epi community might call it) isn't possible.


Twitter is also looking at suicide prevention.

Machine classification and analysis of suicide-related communication on Twitter http://orca.cf.ac.uk/76188/

Analysing the connectivity and communication of suicidal users on twitter http://www.sciencedirect.com/science/article/pii/S0140366415...

I think they had a talk with Prof Louis Appleby (who pretty much leads suicide prevention in UK): https://twitter.com/ProfLAppleby/status/740957809924308992


Is there any way to see exactly what this system does, aside from faking a suicidal episode on my timeline and getting friends to report it?

I've heard that early versions of this system basically inserted barriers between suicidal people, who are already overwhelmed, and their support networks.

Considering Facebook's poor track record when it comes to handling vulnerable populations (nym policy, location support in Messenger, etc.), I'd strongly discourage anyone from using this reporting system without a complete understanding of what it actually does.

Heed should also be paid to the effect of having more than one of these systems being triggered at the same time. The last thing I need when I'm feeling overwhelmed is a dozen apps on my phone suddenly changing their behavior.


I have mixed feelings about this, but I think it could be positive depending on how data related to a user's suicidal state is protected.

I recently started conducting studies dealing with technology and psychological interventions and had to find ethical (privacy protecting) mechanisms for responding to participants who express suicidal intentions or show several indicators suggesting depression. We ultimately settled on an automated mechanism that suggests resources to the participant directly in cases indicating depression and human intervention on cases expressing suicidal intentions. It isn't perfect but seemed to be the best balance between privacy and helpfulness.


I understand other people's skepticism, but I have a Facebook friend from high school whose behavior has been extremely volatile. He was always an outcast, but lately he has been reaching out to random people from our grade and leaving incoherent comments. He would make up stories that clearly never happened, and sometimes would become extremely sympathetic, thanking them for "all they tried to do".

I have been watching this strange story from the sidelines, but I know other people have shown concern as well. If this research can help people like him, I think it would be well worthwhile.


I can't think of many worse outlets for this type of concern, or forums to deal with it. Social media is very good at connecting people initially, but it's piss poor at forging much deeper.


What if Facebook developed detection of drug users? Good? Bad?


They probably have it


Somewhat trusting person that I am, I can see this being an initiative born out of genuine sympathy without sinister motive... but I can see a few potential pitfalls that, depending on how they're handled, could lead to unintended consequences.

One of the more obvious obstacles will just be noise. The ideal situation is someone who posts a clearly suicidal note late at night and a single friend sees it in time to act...this is the kind of signal that a huge network like FB can uniquely catch because of its size.

But what about all the more ambiguous situations? A friend with hundreds of FB friends has gone through a horrible breakup and other real life failures, and he starts posting notes that are potentially signs of suicidal thought...or, at least a dozen of his hundreds of friends think so, and they all flag it as suicidal. What should FB do then? At least a good number of these incidents will be false positives and people will start complaining about FB's human-backed algorithm for discerning potentially suicidal users.

The imminent danger is that someone will post notes that are suicidal in retrospect, people will flag it, and the user will commit suicide anyway. It won't be FB's fault... after all, AFAIK, they're just offering to send a friendly help screen to the user, not call the cops. But that won't be how the media or the deceased user's friends will see it, and the revelation that FB's powerful network has a human/algorithm hybrid element will be controversial in the same way that the trending-news kerfuffle was. Perhaps there will be demands that FB should have done more, and that cops should be involved in the decision making...

But the seemingly inevitable next step is: why even bother waiting for humans to flag suicidal behavior? For a user who has a history of regular activity on the service, I'm sure there's a general profile they fit in terms of sentiment of messages and reduced engagement on the service, just like FB is reportedly able to tell how soon a given couple might break up. There's no reason why FB couldn't augment a profile with a hidden "depression" index to help the suicide response team triage reports... but the existence of such a calculation will undoubtedly be controversial.

And then there's the question of: if FB can calculate such a depression index, why wait for it to reach a high level before taking action? And, why should action be limited to sending an explicit anti-suicide message? Shouldn't FB be obligated to populate the user's news feed with happier messages, more reminders that they have a family, fewer reminders that their former lovers have found new lovers, etc?

And of course, for more controversy, we could always replace "suicide" with other kinds of unwanted behavior.


> And of course, for more controversy, we could always replace "suicide" with other kinds of unwanted behavior.

I can't shake the feeling that social networks are sleepwalking towards the very primitive versions of whatever the human analogue of a Skinner box is.

I know gamification is a thing. Along with the other types of psychological manipulation in games. But this feels different to me.

I get that suicide prevention is a prima facie positive, but I wonder what Facebook's ethics board has to say about this type of behavior in general.

I guess in 20 years it will probably be just like advertising. We know it's inherently manipulative, but try to ignore what we know to be its goals. And lie to ourselves and each other about the extent to which it can alter our behavior.


When I try to report a post, I don't see that option. Where is it? I just see "report spam, report not appropriate, etc."


[flagged]


> Living another miserable 30+ years is a worse way to go than a gunshot.

I have a lot to say about this, as I am a phone volunteer at a crisis center where I routinely have to address all manner of issues, suicide being one of the most serious of them.

I'm limited by time right now, so what I will say is that, based on what I've seen fielding calls, suicidal people are largely suicidal because they are seeking relief from a feeling (or lack thereof). In large part, having someone to whom they can talk and trust is paramount. Just listening to someone actively (paraphrasing their feelings) is a form of relief in and of itself.

When I'm on the phones, I don't try to take away the option for suicide - they are feeling suicidal after all, I am not going to deny what it is they are feeling (in fact, I do the opposite and will openly talk about that suicidal feeling) - but I also sit there and just exist for them. It doesn't always help, but by and large I've found that just being there can provide immense relief. (I should add that I do more than just listen, but active listening is the number one component to what I do.)

I think that's a far cry better than permanently removing that pain. Suicide is an option, but it isn't the only option.


Trying to cheer people up and persuade them not to kill themselves doesn't seem to have much downside. They can still top themselves if they really want to.


Why is this viewpoint SO unpopular on HN? What, precisely, is wrong with a miserable person wanting to end his or her misery? Why do we lionize personal choice in virtually every domain, but collectively choose not to respect the wishes of those who hate their lives?


I suppose nothing. But what's wrong with people who care about a person helping them overcome a (usually) temporary situation and move on to a better, fuller life?

We certainly shouldn't, as prodmerc seems to be doing, say, "Oh, they want to die. Good for them." and walk away. Personally, I'm glad you and prodmerc weren't my friends 10 years ago.


Suicide is often not a very rational or well considered option, except in cases of medically assisted suicide. Suicide is, as they say, a permanent solution to a temporary problem. But it's a lot more complicated than that. I think the best argument for suicide intervention is the many stories from folks who came the closest to succeeding in their suicide attempts, failed, and then changed their minds and decided against suicide. That speaks to the fact that the reasoning going into a suicide attempt is often not on solid ground, and that there are often a lot of ways to remediate "future decades of misery" that are not ending a life.

Speaking personally, I know a lot of people who have had thoughts of committing suicide or have made an unsuccessful attempt at doing so, and in every case it was an erroneous decision; in every case their future held a lot more than years of misery.


Why don't we install suicide booths in coffee shops? Make it convenient and cheap? Sit down, insert $20 (or wave your smartphone NFC), get a shot of toxic gas and an automatic call to the morgue to pick up your body? Uber can contract the deliveries.


The apparatus that thinks suicide is an option (the sufferer's brain) is in many cases not the best apparatus to make that decision. There are some rational reasons for suicide, such as severe chronic pain, but most of the time the reasons are not rational.


What's next? Flag as Muslim Extremist?


I think that already exists, just not publicly.



