Editorial

The Opportunities and Challenges of Regulating the Internet for Self-Harm and Suicide Prevention

Published online: https://doi.org/10.1027/0227-5910/a000853

Working to influence government policy on suicide prevention in the United Kingdom usually means working with the Department of Health and Social Care on the cross-government suicide prevention strategy in England, and with the equivalent bodies in Northern Ireland, Scotland, and Wales. Rarely does an opportunity come along that involves influencing legislation, but this is what is happening now. In an age of increasing digital innovation and communication, the Conservative Party committed in its manifesto to making the United Kingdom the safest place in the world to be online while defending freedom of expression (Conservative and Unionist Party, 2019). The government has been consulting on how to do this since April 2019 (HM Government, 2019), with consultation responses published in February 2020 and December 2020 (Department for Digital, Culture, Media & Sport [DCMS], 2020). The result is the Draft Online Safety Bill 2021 (DCMS, 2021), which was published in May 2021 and outlines a new regulatory framework for tackling harmful content online. The Draft Bill is due to go through the UK Parliament in 2022, but even once it is passed into law, there will need to be a series of codes of practice (guidance provided by the regulator that enforces the provisions of the Bill), giving more detail on what content is covered and what types of action are expected from online services. The big question for self-harm and suicide prevention is whether this new legislation poses a challenge or an opportunity.

The Nature and Impact of Self-Harm and Suicide Content Online

Self-harm and suicide content online can take many forms, including visual and descriptive user-generated posts, news articles and other media, and content produced by political, charitable, and healthcare organizations. This content can include online memorials, depictions of methods of self-harm or suicide, lived experience accounts, online challenges, images of scars or wounds, and stories of hope and recovery. It is presented across a range of online platforms, including social media sites, online forums, and gaming sites. The impact of engaging with online content related to self-harm and suicide is complex, and the evidence around these issues, although emerging, remains limited.

Many studies have reported that the Internet can be a lifeline for individuals who experience self-harm and suicidal feelings, as it allows them to access emotional support (Davis & Lewis, 2019; Lavis & Winter, 2020; Mok et al., 2016) and practical information and advice (Lavis & Winter, 2020). Given that help-seeking among individuals who experience suicidal feelings and behavior is often low (Biddle et al., 2004), the Internet presents unique opportunities for suicide prevention. However, these online spaces can also expose vulnerable users to distressing or harmful content that risks triggering or exacerbating their self-harm or suicidal feelings. Such exposure may result in contagion effects (Arendt et al., 2019; Marchant et al., 2017), competition between users (Marchant et al., 2017), and increased knowledge about the availability and lethality of particular suicide methods (Biddle et al., 2012). Potentially harmful content includes detailed information about methods of self-harm or suicide as well as "pro-suicide" discourses that can encourage and normalize these behaviors. However, existing evidence on what constitutes harmful content, when, and for whom is mixed (Marchant et al., 2017), and further research is needed.

Crucially, research has shown that exposure to self-harm and suicide-related content online is widespread among the general population. In England, a population-based study of young people aged 21 years reported that 22.5% had engaged in self-harm and suicide-related Internet use (Mars et al., 2015). Moreover, this study found that 42.1% of young people who experienced nonsuicidal self-harm and 70.2% of young people who had made a suicide attempt also reported self-harm or suicide-related Internet use. A national inquiry into suicides by children and young people found evidence of suicide-related Internet use in more than one quarter (26%) of deaths among under-20s and 13% of deaths among 20–24-year-olds (NCISH, 2017). In addition, Padmanathan et al. (2018) found that the prevalence of self-harm and suicide-related Internet use was 8.4% among adults who presented to a hospital emergency department following self-harm or attempted suicide, rising to 26% among young people aged under 18. While it is difficult to draw causal conclusions, these findings show that self-harm and suicide-related Internet use is highly prevalent among people who experience suicidal thoughts and engage in suicidal behavior. Perhaps this is unsurprising, given that most of us use the Internet to find out about most things in our daily lives. But if we do not have a full understanding of what content is being engaged with and the impact of that engagement, where does this leave proposed legislation in this area?

Proposed Scope of the Online Safety Bill

The Draft Bill (DCMS, 2021) sets out which online services are going to be covered and what type of content will be in scope. In summary, it proposes:

  • All user-to-user services and search services will be covered by the legislation: This means any service where users can communicate with each other (except comment sections on news media websites), as well as search engines.

This scope is large but attempts to take a balanced approach to harms, which means not all services are covered in the same way. The Bill uses categories of illegal content and legal but harmful content and places a duty of care on services to address these, depending on their reach (i.e., how many users) and functionality.

  • For content that is illegal: Every service or platform will be required to take action on it regardless of its size.
  • For content that is legal but harmful: All services will have to show they are protecting children (defined as under 18s), by putting in place measures to ensure children cannot access harmful content on their platform. Platforms can choose how they do this. For example, they might put in place measures to remove content that is legal but harmful. Alternatively, they might exclude children from accessing their service (e.g., through age verification technology).
  • An additional duty of care will be placed on the services with the largest reach and highest functionality, known in the Bill as "Category 1 services" (e.g., social media giants such as Facebook and Twitter, and search engines like Google). These Category 1 services will be required to protect adults as well as children from priority legal but harmful content.

There is a large amount of content that could be legal but harmful across a wide range of topic areas. The legislation will not seek to cover all legal but harmful content, but will set out some priority areas to be covered by the Bill. Ireland is somewhat further ahead of the United Kingdom in bringing forward legislation on this issue and has already proposed suicide as a priority area of content in its legislation. In December 2021, a parliamentary committee established to undertake pre-legislative scrutiny of the Draft Bill published its report (Joint Committee on the Draft Online Safety Bill, 2021) with recommendations for changes. The report called for an overhaul of the proposed categorization of services in the Draft Bill and suggested a more nuanced approach that recognizes factors such as risk, reach, user base, and safety performance. At the time of writing this editorial, the government had not yet responded to the report, but in February 2022 it announced that suicide would be a priority area of illegal content, which is a good first step. There is, however, a continuing high level of debate about how to make the proposed legislation effective and practical.

Challenge of Defining Types of Content

For content related to suicide and self-harm, one of the key challenges is going to be defining what falls into the different categories of illegal and legal but harmful.

The Bill covers the whole of the United Kingdom, but definitions of illegal content relating to suicide differ across jurisdictions. In England, Wales, and Northern Ireland, encouraging suicide is illegal (Suicide Act, 1961; Criminal Justice Act [Northern Ireland], 1966), so any online content that encourages suicide would be illegal, and every online service within the scope of the Bill would have to address it.

In England and Wales, the Law Commission has recently published recommendations for reforming the communications offences to tackle serious harms that result from online abuse (Law Commission, 2021), including recommending a new offence of encouraging or assisting serious self-harm. If this proposed offence becomes law, then it would tip more content that encourages this behavior into the illegal category.

The new Online Safety Bill will be UK-wide, but without parallel offences around encouraging suicide or self-harm in every jurisdiction of the United Kingdom, it is unclear whether this means there will be differences in which content is deemed illegal depending on the jurisdiction in which it is posted online. Therefore, one of the first challenges is going to be understanding how a UK-wide law will apply in different jurisdictions across the United Kingdom. However, for larger platforms with a global reach, it is fair to assume that they will take a UK-wide approach to the issue anyway, and probably a Europe-wide approach.

The next major challenge for self-harm and suicide-related content is the definition of legal but harmful content. Research undertaken by Samaritans and the University of Bristol sought to understand how people use the Internet when they are suicidal, and showed the complexity of this area of content (Biddle et al., 2018). The impact of exposure to certain types of self-harm or suicide content online may vary for different people (Mok et al., 2016). This means that content that could be harmful to one person at one point in time may not be harmful to another person, or may not be harmful to the same person at a different point in time.

The Draft Bill (DCMS, 2021) includes a broad definition of "content that is harmful to adults." Section 46(3) states that content is considered to be harmful to adults if "there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities."

However, this does not give us much to go on, so the regulatory framework, which will include a set of codes of practice, will be critical for defining what should be deemed harmful in the priority areas of content. It has already been announced that Ofcom will be the regulator, and while Ofcom can start some preliminary work on these definitions, it will take time to write the codes. With the Bill likely to be introduced to Parliament in 2022, and potentially a year needed for it to be passed into law, it is hard to see how the codes of practice will be in place before Spring 2024 at the earliest. The timing is itself a challenge, and those concerned about harms taking place now are frustrated that it will still be several years before services have a duty to act. It will be essential that Ofcom works closely with subject matter experts, including those with lived experience of self-harm and suicide, to inform its codes of practice. The codes will need to be reviewed regularly as our understanding of the impact of self-harm and suicide content deepens, technology evolves, and new online issues emerge. If self-harm is also deemed a priority in the Bill, as has recently been announced for illegal suicide content, then the lead-in time to the new codes of practice gives us all an opportunity to improve our articulation of what is harmful and what is helpful before these codes are developed.

Steps will also need to be taken by government, researchers, and platforms to monitor the unintended and potentially harmful consequences of implementing these codes of practice. Will the new legislation result in people moving into even darker spaces online to find and share content? Will removing and banning more content lead to stigmatization of self-harm and suicide, and reduce help-seeking possibilities for people who need them? With a limited existing evidence base in this area, we asked people with lived experience for their views and present an overview of what they told us in the next section.

Views of People With Lived Experience of Suicidal Thoughts, Attempts, and Bereavement

I had some really negative experiences of using the Internet to look at self-harm content when I was younger. Unfortunately, I was unaware of how this could actually be making things worse, and I continued to access these pages for a long time. Despite the content being incredibly dangerous and triggering, I kept going back as it provided me with a feeling of belonging, and a place to seek advice. This was something I was really lacking in real life, and being able to interact with like-minded strangers felt really appealing to me at the time.

Having learned from my experience, I'm now super careful with what I look at online – my feed is now a place where I get helpful information about mental health, funny memes to cheer me up, inspiration for recipes, and it actually feels like a positive and helpful tool, but it scares me to think that other people out there might still be exposed to the darker side of the Internet. I just wish there had been more safeguards in place, so I could have got the connection and support I needed without being exposed to the explicit content and unsolicited advice that led to more harm than good.

Ellie

This quote clearly shows the challenge and opportunity that the online environment provided Ellie with, and echoes findings from a small insights survey undertaken by Samaritans in 2021 with 96 members of our lived experience panel. The survey explored what people thought of the UK government's proposals in the Draft Online Safety Bill 2021. Survey respondents were aged 18–65+ years (7% aged 18–24 years, 42% aged 25–44 years, 45% aged 45–64 years, 6% aged 65+ years), and most had lived experience of suicidal thoughts (92%), self-harm (66%), suicide attempts (63%), and bereavement by suicide (27%). Of these respondents, 73% were female, 22% were male, and 5% identified as nonbinary. They told us the following:

New Laws Are Needed to Tackle Online Harms

"It is too unregulated and like the Wild West out there."

Most respondents (78%) agreed that new laws are needed to make online spaces (such as social media sites and forums) safer for users. Respondents explained that the current system is "not fit for purpose" and new laws are needed to regulate harmful content such as sharing of suicide methods, and graphic images or videos of self-harm or suicide. Respondents also wanted social media companies to be accountable for the harmful content they host on their platforms.

Laws Need to Include Smaller Sites and Forums

"If you drive people away from popular and regulated social media sites you might push them underground where ‘pro' content could become more rife and extreme."

More than three quarters of the respondents (77%) agreed that proposals in the Draft Bill – which only cover the largest and most popular platforms when it comes to legal but harmful content and over-18s – should include smaller online spaces. Respondents warned that these plans risk pushing harmful content and vulnerable users to smaller sites that are less likely to provide moderation and support. Many also felt that the content hosted on smaller sites is often more dangerous due to increased anonymity and the presence of particularly graphic content.

Certain Types of Harmful Content Should Be Removed

A total of 77% of the respondents felt that online platforms should remove certain types of self-harm and suicide-related content. Respondents cited three categories of content that they felt platforms should take steps to remove. The first category was graphic image-based content such as explicit depictions of self-harm and livestreams of suicides or suicide attempts. The second category referred to detailed advice and information about methods of self-harm or suicide. The final category was broader and concerned any content that appears to encourage or glamorize self-harm or suicide.

Online Spaces Can Also Be Beneficial

"To have another person say ‘I get it' is so validating."

Despite these concerns, almost three quarters (73%) of the respondents agreed that online spaces can be helpful for individuals experiencing self-harm and suicidal feelings. Respondents stressed that online spaces provide "life-saving peer support" and allow people to feel less alone and share advice for coping with distress. They also reported that online peer support often fills gaps in mental healthcare services, as it can be accessed while waiting to receive more formal support. Some respondents commented that online content (e.g., news articles) can be useful for raising awareness positively about self-harm and suicide, and lived experience accounts of recovery can offer hope to others.

We Need to Recognize the Risks of Increasing Legislation in This Area

"The last thing vulnerable people need is to see our words disappear from the Internet because our distress is criminalized."

Many respondents felt it is important to ensure that these benefits of online spaces are not compromised by new laws. Some respondents were concerned about the impact of legislation on vulnerable users who might be experiencing extreme mental distress. Many respondents also felt that it is important to ensure that tackling harmful content does not inadvertently increase shame and stigma among individuals who experience self-harm or suicidal thoughts.

We Need to Adopt Strategies to Make Online Spaces Safer

To make the Internet safer, 74% of the respondents felt that all online spaces should display helpline numbers or signposting information related to suicide or mental health support. In addition, 67% believed that all online spaces should have moderators trained in mental health awareness. Many respondents agreed that all online spaces should remove content that contains detailed descriptions of harm (63%) and that they should provide users with self-care resources (57%). However, only half of respondents felt that online spaces should censor potentially harmful content relating to self-harm and suicide (51%).

These findings are consistent with those from a survey undertaken by the National Suicide Prevention Alliance in September 2021.

Conclusion

This brings us back to the question: Is the forthcoming Bill an opportunity or a challenge for self-harm and suicide prevention? The views of people with lived experience and the emerging evidence base are clear: it is likely to be both. It is clearly an important opportunity to reduce access to harmful self-harm and suicide content, but there are two big challenges.

Scope: With the scope as drafted, it is likely that smaller platforms and services will not have to do anything about legal but harmful content as it relates to adults. Services such as Wikipedia that have high reach and high trust but very low functionality may not fall under "Category 1," which means that, as long as children cannot access harmful content on them, they do not need to take any action. This is clearly concerning when you think about detailed instructional method information that can cause harm.

Definitions of harm: There is a significant challenge ahead for Ofcom in defining what is harmful content related to suicide and self-harm, with limited evidence and the potential for this to vary across people and depending on the level of distress of the person engaging with the content. Taking what is unequivocally harmful content as a starting point would be a useful approach.

But even aside from these challenges, and focusing only on the opportunity, legislation on its own will not be enough. Although implementing a robust legislative framework will help to regulate self-harm and suicide-related content online, it is important to remember that this is only one piece of the puzzle. If we are to ensure that the Internet becomes a safer space for vulnerable users, it is essential that we take a multifaceted approach. This requires ensuring that people of all ages are informed and given the tools they need to keep themselves safe when engaging with self-harm and suicide-related content online. It also means that professionals, such as mental health practitioners, teachers, and social workers, need the confidence and skills to support people around their online activities and signpost them to supportive online spaces.

Reflecting this need for a multifaceted approach, in 2019 Samaritans launched our Online Excellence Programme in collaboration with the UK Department of Health and Social Care and digital sector partners to create a hub of excellence in online suicide prevention. The aim of the program is to reduce access to harmful content online and increase opportunities for support for vulnerable users. We have produced industry guidance for sites and platforms on managing self-harm and suicide content safely, which we believe provides a strong starting point for Ofcom, and we run an advisory service for platforms and professionals working in this space. We also have resources for users, and those around them, to help them create and share content safely.

Improving Internet safety will also require more sophisticated artificial intelligence technology to detect and manage self-harm and suicide-related content quickly and effectively, and for this technology to be available to platforms of all sizes and capacities. Those responsible for content on social media platforms and online forums need to be trained to implement the codes of practice so that they remove content safely and direct users sensitively to appropriate support.

Finally, there is an urgent need to fund and commission research that addresses gaps in knowledge about what types of online content are most likely to cause harm, when, and to whom. Only by adopting a multifaceted approach toward online harms can we build safer online communities that harness the power of the Internet to deliver effective self-harm and suicide prevention.

Jacqui Morrissey, MSc, is the assistant director for research and influencing at Samaritans, the leading suicide prevention charity working across the UK and Ireland, and co-chair of the National Suicide Prevention Alliance in England. She is committed to ensuring the translation of research into practice in order to help prevent suicide.

Laura Kennedy, MPhil, is the research and evidence officer for the online harms team at Samaritans. She is also a PhD candidate in criminology at the University of Cambridge. Her primary research interests are suicide prevention, peer influence, and adolescence.

Lydia Grace, PhD, is the research and policy program manager at Samaritans. She leads on our Online Excellence Programme, working with government and industry to develop a hub of excellence in online suicide prevention. Her PhD explored memory and self-concept in depression and her research interests include suicide prevention and online interventions.

References