SamaritansRadar Twitter monitoring system thoughts
Moderator: embleton
SamaritansRadar Twitter monitoring system thoughts
The Samaritans Radar Twitter application may have been designed with the best will in the world, but do people really want their Twitter account monitored 24/7 for key phrases, with emails sent out to every follower running Radar who thinks they are a super supporter of those with mental health issues, especially on a subject as sensitive as suicidal thoughts?
Twitter is a fairly passive public social network and not an especially active environment for interaction. Some people want nobody invading their privacy with intrusive tweets, followers, or direct messages; they simply comment quietly among those who understand the same feelings, often at unusual times of day or night and using hashtags, when nobody they know who doesn't understand these thoughts is watching. Even if some of those followers are running Radar, how can they support anyone retrospectively, or when privacy is what's actually needed?
The fact is that some individuals may have suicidal thoughts but then delete their tweets once they've thought further about the subject, or for other reasons such as privacy. With an alert system that notifies their followers this is no longer possible: the system keeps a permanent record of the activity, which may cause stigma and even discrimination. Does anyone want that scenario? They've posted a tweet and then deleted it, but even though no data exists at the link, a Radar alert is still sent, and once sent it cannot be undone: the emails have already gone out.
Is such a system really going to help those suffering from suicidal thoughts? Generally, once you've got over a bad period of suicidal thinking you don't want to be reminded of it for weeks on end, even by the best-intentioned people in the world. It will not work; it will cause excessive suffering, and that suffering will be prolonged by this poorly thought-out Radar system.
What happens if two Twitter users who follow each other both use the Radar application, and one of them completes suicide before the other has even read the Radar alert email? Won't that also traumatise the survivor? How are they going to be supported themselves, when receiving the email after the event leaves them thinking they have failed their own Radar support network?
The system now has an opt-out facility, but unfortunately you have to follow the Samaritans in order to send them a direct message to opt out of the Radar system, which is completely wrong. They are now collecting information about Twitter accounts, and I think this should rightly come under the Data Protection Act as collection of personal information. In my opinion the Radar system should be opt-in, not opt-out, so that those who wish to use its facilities can choose to do so.
Automatic data analysis of the public Twitter network is, in my opinion, already excessive, and the majority of it is done by marketing companies. The same is true of the Radar system: it was designed and built by Digital Jam, a marketing company. I've also done my own analysis of how my forum receives hits when I tweet: the automated analysis follows the link in your tweet to your blog or website within a second, whereas real people take longer, so it's easy to tell the difference. I've no objection, but I'd prefer to know the signatures of these agents so I can filter them and adjust the hit counts on my own site accordingly. I've identified at least 10 marketing agents on Twitter following every tweet, so Radar isn't the only offender.
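As a rough illustration of the kind of filtering I mean, here is a minimal sketch (assuming a hypothetical list of known agent user-agent substrings, not any signatures I've actually confirmed) that separates hits arriving within a couple of seconds of a tweet from slower, presumably human, visits:

```python
from datetime import datetime, timedelta

# Hypothetical user-agent substrings for known automated agents; the real
# signatures would have to be collected from your own access logs.
KNOWN_AGENT_SIGNATURES = ["Twitterbot", "facebookexternalhit", "Slackbot"]

def classify_hit(hit_time, user_agent, tweet_time, threshold_seconds=2):
    """Classify a single page hit as 'agent' or 'human'.

    A hit is treated as automated if its user agent matches a known
    signature, or if it arrives within `threshold_seconds` of the tweet
    that carried the link, faster than a person realistically clicks.
    """
    if any(sig.lower() in user_agent.lower() for sig in KNOWN_AGENT_SIGNATURES):
        return "agent"
    if hit_time - tweet_time <= timedelta(seconds=threshold_seconds):
        return "agent"
    return "human"

# Example: a tweet posted at 21:00:00, followed by two hits.
tweet_time = datetime(2014, 11, 5, 21, 0, 0)
hits = [
    (datetime(2014, 11, 5, 21, 0, 1), "Twitterbot/1.0"),
    (datetime(2014, 11, 5, 21, 3, 40), "Mozilla/5.0 (Windows NT 6.1)"),
]
for hit_time, ua in hits:
    print(hit_time, classify_hit(hit_time, ua, tweet_time))
```

Filtering on timing as well as user agent matters because many of these agents don't identify themselves honestly in the user-agent string.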
The Data Protection Act would apply to the Radar system because it is processing sensitive personal data, so the Samaritans need to obtain permission from each user to process that information; it must therefore be opt-in to comply with the law. Those using the Radar system are acting as agents of the Samaritans, that is, agents of the data controller, because they are storing sensitive personal emails produced by automatic filtering, and the Data Protection Act defines that as requiring consent. You are likely to be sued if you act inappropriately on such filtered data as an agent of the data controller, so be very careful if you sign up to the Radar system.
I personally have had suicidal thoughts in the past and have even acted on them, as I suffer from bipolar affective disorder, so I have direct knowledge of this very topic. Those who follow me should not act on a Radar alert: I already have an excellent support network in place, but I may occasionally tweet about my feelings of suicidality.
Re: SamaritansRadar Twitter monitoring system thoughts
A point that has not been raised about the Radar system is that it was not designed with those with severe mental health issues in mind. They are confused when in suicidal states and tend to use language completely differently, and I question whether this can be analysed by artificial-intelligence linguistics at all, never mind by lexicon analysis alone. This subsection of the mental health community completes suicide at the highest rate of any group; they usually suffer from bipolar affective disorder, schizoaffective disorder or schizophrenia, and therefore think completely outside the box. Only a real-life friend or a professional could possibly understand when they are in crisis and likely to experience suicidal ideation that may lead to suicide.
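To show why lexicon-only matching worries me, here is a minimal sketch of the kind of matcher such a system presumably uses, with a small, hypothetical keyword list (the actual Radar lexicon has not been published). It flags a figurative tweet and misses an oblique one, which is exactly the failure mode I'm describing:

```python
# Hypothetical keyword lexicon, not the real Radar word list.
LEXICON = {"want to die", "kill myself", "end it all"}

def lexicon_flag(tweet: str) -> bool:
    """Return True if any lexicon phrase appears verbatim in the tweet."""
    text = tweet.lower()
    return any(phrase in text for phrase in LEXICON)

tweets = [
    "this traffic makes me want to die, lol",                                 # flagged: figurative speech
    "gave my records away today. they won't be missed, and neither will i",   # missed: oblique phrasing
]
for t in tweets:
    print(lexicon_flag(t), "-", t)
```

A person in crisis who thinks completely outside the box is far more likely to sound like the second tweet than the first, and no keyword list will catch that.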
Re: SamaritansRadar Twitter monitoring system thoughts
And of the 300 individuals who took part in the study, did any of them attempt or complete suicide? That is the only way you could validate positive hits in the research. This measure should also be analysed against suicide rates on Twitter now and among those who complete suicide in the future; if the rates go up, somebody should be held responsible for the overall Radar system malfunctioning and increasing suicide rates.
Re: SamaritansRadar Twitter monitoring system thoughts
From experience and research it has been highlighted that individuals in real life who talk about suicide often complete it while apparently talking happily after having expressed such thoughts, so the Radar system and its users should understand this fact. Research by mental health professionals has also questioned whether admitting individuals to psychiatric hospital is such a good idea, since they are more likely to complete suicide after leaving a psychiatric hospital than if they had not been admitted in the first place.
http://toddkashdan.com/articles/suicide.pdf
http://www.medicine.ox.ac.uk/bandolier/ ... chsui.html
Re: SamaritansRadar Twitter monitoring system thoughts
https://www.change.org/p/twitter-inc-sh ... tans-radar is a petition to shut down Radar on Twitter; please sign it.
The petition has been suspended; the Samaritans have suspended the application (7th November 2014).
Re: SamaritansRadar Twitter monitoring system thoughts
Professional mental health workers advise and help those with severe mental illnesses to claim disability benefits simply by validating that they have a diagnosis. How can the Radar system help in this respect? Most of those who think they can help are not even aware of the relevant Acts, never mind having read and understood them. That kind of advice really needs to be given in person rather than by a digital entity, as most people with severe mental health problems aren't using technology at all, and those who do are often paranoid about being monitored. One good thing the system could do is signpost to reputable websites offering advice; was this even considered? Probably not.
Instead it signposts Twitter users to people who think they can offer advice by chatting among themselves about severe depression, rather than signposting to organisations that offer support in claiming disability benefits or organisations that really do help; sadly the Radar system doesn't even do that. Whoever designed this understands nothing about benefits, mental disabilities, Mental Health Act section 117 free aftercare, local organisations, or the other things that can help those in despair.
Samaritans Radar has no empathy or compassion for those suffering from mental distress: it evaluates a person's mental state and informs third parties without the individual's consent. It doesn't inform the person concerned at all when it takes such action; that is a one-way conversation, which is completely disrespectful and causes stigma and discrimination. I'm sure a lawyer would have something to say about that, and about the implications of a machine representing a person's character. A human designed that machine, and they should be held personally responsible for such decisions.
Re: SamaritansRadar Twitter monitoring system thoughts
Samaritans Radar boasts of the hashtag #SamaritansRadar trending and assumes that is positive feedback on their system; nothing could be further from the truth. The majority of the tweets I've read on that hashtag are completely negative, and even NodeXL confirms it. Why are the Samaritans not listening to people's distress and suffering? I've read almost all the blogs on the subject, and the vast majority are written by mental health bloggers who completely disagree with the Radar monitoring and with proactive intervention by misinformed users on their feed; in some cases they have soft-blocked individual accounts for using Radar inappropriately and without consent. A great many individuals have stated categorically that any follower using the Radar system should unfollow them immediately.
Re: SamaritansRadar Twitter monitoring system thoughts
Universal Declaration of Human Rights, Article 12:
"No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks."
The Disability Discrimination Act requires reasonable adjustments to be made for outbursts of aggressive language or statements concerning suicidality that are caused by depression or mood swings. Here a machine has assessed the mental state of an individual and made no adjustments, when that individual may already be covered by the DDA because of a mental health disability, as I am. I require reasonable adjustments to your service and its algorithm to account for my disability, so that the Radar app is not triggered by it, or I will consider it harassment. That is why I have opted out of your service, and I am entitled to opt back in once adjustments have been made to it.
"In each case the conduct must have the purpose or effect of violating a person’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for them." << you Radar system does such Samaritans.
I receive free aftercare under Mental Health Act section 117, and I'll be making the case that I am being isolated on the Internet by your system, so I will require funding to fight you in court.
"No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks."
Disability Discrimination Act in respect of making reasonable adjustments because of outburst of aggressive language or statements concerning suicidality because of depression or mood swings, a machine has assessed the mental state of an individual, and hasn't made adjustments, when that individual may already be under the DDA because of a mental health disability, and I am. I require reasonable adjustments to your service and the algorithm to account for my disability, and Radar app not being triggered accordingly or I will consider it harassment, which is why I've opted-out of your service and I'm entitled to opt-in when adjustments have been made to said service.
"In each case the conduct must have the purpose or effect of violating a person’s dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for them." << you Radar system does such Samaritans.
I have mental health section 117 free aftercare, and I'll be requesting that I'm being isolated on the Internet by your system, therefore I'll require funding to fight you in court.
Re: SamaritansRadar Twitter monitoring system thoughts
According to the latest research I've found across the Internet, the only individuals qualified to support someone through suicidal thoughts are those who have direct experience of suicide attempts and have survived them. If you're not one of those individuals, don't attempt to support people with suicidal thoughts; you're not qualified even if you're a professional mental health worker, and that usually amplifies the chance of suicide attempts. The best support is offered by those with experience, i.e. people who have attempted suicide, survived, and live to tell the tale. Suicide stigmatisation is a leading cause of suicide attempts; further research needs to be done in this area, and appropriate action taken by the media to address this very issue. Source of information: a suicide prevention conference in Australia. Interestingly, listening to music and dancing helps to reduce suicide rates!
UK ONS suicide rates have been completely variable over the last 10 years; there hasn't been any remarkable reduction throughout that period, hence my statement concerning professional mental health services. In fact the most likely time for suicide to occur is two days after discharge from psychiatric hospital; read into that what you will. I've yet to find any research showing that intervention by professional mental health services has any impact on suicide rates.
Listening to the right type of music may improve mood. Emo music, listened to mostly by females, has a negative impact on mood instability. In my opinion iTunes and the like should share their data with research establishments, with consent, to enable better research in this area in the digital arena. It would be an interesting experiment to see whether the Radar system could be utilised by streaming music services to improve mood during periods of distress. Last.fm could be used to monitor users' listening habits; I used the service myself, mae-3 is my profile on that system, and anyone has my permission to analyse it.
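For anyone wanting to look at that listening data, here is a minimal sketch, assuming you have registered for a (free) Last.fm API key, that pulls a user's recent listening history via Last.fm's public user.getRecentTracks method. Any mood tagging on top of this would be entirely hypothetical and would need real research behind it:

```python
import requests

API_URL = "http://ws.audioscrobbler.com/2.0/"

def recent_tracks(user, api_key, limit=20):
    """Fetch a user's recent tracks from the public Last.fm API."""
    params = {
        "method": "user.getrecenttracks",
        "user": user,
        "api_key": api_key,   # requires a free Last.fm API account
        "format": "json",
        "limit": limit,
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["recenttracks"]["track"]

# Example usage against the profile mentioned above (API key is a placeholder).
for track in recent_tracks("mae-3", api_key="YOUR_API_KEY"):
    print(track["artist"]["#text"], "-", track["name"])
```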
Re: SamaritansRadar Twitter monitoring system thoughts
DDA reasonable adjustments:
1. Opt in and opt out whenever you like.
2. A tick box to identify whether the user has a mental disability; only those who have a disability should be able to run Radar, as the latest research suggests they're the only people who can really help (maybe with options for this: able, mentally disabled, neutral).
3. There should be a blacklist and whitelist facility at the user level.
4. The Radar system should prompt the suicidal user that they've set it off, with empathy, before doing anything, because that is consent (see the sketch after this list).
In its current form it should be switched off until these, and maybe other, modifications are programmed into the system.
Generally, I personally don't think it's black or white; it just needs to be improved.
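As a rough illustration of points 3 and 4, here is a minimal sketch, using entirely hypothetical data structures rather than anything the Samaritans have published, of how a per-user whitelist/blacklist and an explicit consent check could gate an alert before any email goes out:

```python
from dataclasses import dataclass, field

@dataclass
class RadarPreferences:
    """Hypothetical per-user settings for a redesigned, opt-in Radar."""
    opted_in: bool = False
    whitelist: set = field(default_factory=set)   # followers allowed to receive alerts
    blacklist: set = field(default_factory=set)   # followers never to be alerted

def alert_recipients(author_prefs, followers, author_consents):
    """Return the followers who may be alerted, or nobody at all.

    No alert is sent unless the author has opted in AND has consented to this
    specific alert (point 4). Recipients are then filtered through the
    author's own whitelist and blacklist (point 3).
    """
    if not author_prefs.opted_in or not author_consents:
        return []
    allowed = [f for f in followers if f not in author_prefs.blacklist]
    if author_prefs.whitelist:
        allowed = [f for f in allowed if f in author_prefs.whitelist]
    return allowed

# Example: the author has opted in, trusts two followers, and blocks one.
prefs = RadarPreferences(opted_in=True,
                         whitelist={"trusted_friend", "crisis_buddy"},
                         blacklist={"nosy_marketer"})
print(alert_recipients(prefs, ["trusted_friend", "nosy_marketer", "stranger"],
                       author_consents=True))   # -> ['trusted_friend']
```

The point is simply that consent and recipient control sit in front of the alert, not behind it; nothing goes out unless the person being monitored has said yes, both in general and for that particular tweet.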