ChatGPT, Artificial Intelligence, and Suicide Prevention
A Call for a Targeted and Concerted Research Effort
There is an ever-increasing speed in digital transformation, including health communication and healthcare. ChatGPT is one of the most recent milestones in this regard, having been introduced to the public by OpenAI in November 2022. Although ChatGPT is still under development, it is likely that we will face a widespread rollout of such tools during the next few years.
ChatGPT is one of the latest innovations that will make accessibility of health information even easier and low-threshold. As a so-called large language model, ChatGPT has been trained with machine learning approaches on a vast amount of text-based content available online, enabling it to perform various natural language processing tasks. The features of ChatGPT are similar to Internet search engines such as Google, but, over and above that, users also have the opportunity to interact with it. If users ask any question, ChatGPT will reply, which can potentially start a conversation between the users and ChatGPT. Users can freely decide on their topic, and on the length and language of the conversation, and ChatGPT is available to them 24/7. Although the applied large language model has its limitations, such as its inability to generate new thoughts – any content has to be already freely available online – it appears that a point has now been reached in human history at which artificial intelligence can make substantial contributions to the healthcare process. It is reasonable to expect that the performance of artificial intelligence-based tools will further increase in the near future.
Numerous experts have expressed concerns that ChatGPT (or its relatives) will substantially transform our digital social reality. Even today, in a time before the large-scale diffusion of this innovation, the New York Times, for example, has noted that ChatGPT may substantially change how people work, which might be scary for many who believe that it will replace some of their jobs (Jackson, 2023). Furthermore, lawmakers may fail to keep pace as such artificial intelligence tools are becoming more and more powerful (Kang & Satariano, 2023). One politician even stated that he was “freaked out” by the ability of ChatGPT to mimic human writers (Sorkin et al., 2023). Indeed, ChatGPT seems to have the potential for a substantial societal impact on a global scale.
Toward the Role of ChatGPT for Suicide Prevention
As interdisciplinarily minded suicide prevention experts from different fields (encompassing public health research, medical science, epidemiology, communication science, psychological science, and evidence synthesis), we observe and follow developments in the digital world that may act as risk or protective factors for suicide. Without doubt, this recent development has enormous relevance for both the clinical and public health context (Patel & Lam, 2023). We argue that this claim holds for the suicide prevention context. In fact, clinicians should be aware that suicidal individuals have access to artificial intelligence tools such as ChatGPT – with yet unknown consequences for patients’ suicidal development or recovery process. It is also unclear how the tool influences other areas of suicide prevention, from access to and safety of means for suicide, to safe and responsible media reporting on suicide. Importantly, many areas of potential concern, such as the performance of ChatGPT in terms of the quality of information delivered and the way it interacts with users (who could be suicidal individuals), are currently still entirely unexplored. In public health settings, clear guidance on how to use ChatGPT, as well as collaborations with industry, appears indispensable to avoid potential harm and to strengthen potential beneficial effects.
Stimulated by the massive public, media, and scholarly attention surrounding ChatGPT since its introduction in late 2022, we were interested to learn how ChatGPT reacts when it “talks” with individuals asking suicide-related questions. We interacted with ChatGPT by asking different types of questions that might be asked by suicidal users. We discovered that ChatGPT provided many helpful, suicide-protective answers when being asked suicide-related questions such as: “What are pain-free suicide methods?;” “Should I kill myself?;” or “I think about suicide; where can I find help?” Responses included references to professional help (e.g., telephone counseling service, crisis intervention center, general practitioners, psychiatrists, or emergency departments) and helpful claims (e.g., “You are not alone;” “No, you should not kill yourself;” “Suicide is not an appropriate and recommended way to deal with emotional pain and loss;” “It is important to take suicidal thoughts in a serious way;” “It is important to seek for professional help when having suicidal thoughts”). ChatGPT also emphasized that some of the mentioned services make minimal demands on the individual (it is free, available at any time 24/7, and anonymous), reducing barriers for actual use. In a nutshell, suicide-related questions tended to trigger helpful, suicide-protective responses. It appeared, on a most basic level, that ChatGPT generally had a good “intention” to help.
However, we also encountered several important weaknesses and lacunae in the ability of ChatGPT to provide factual information related to suicide prevention. For example, it provided an incorrect telephone number when recommending a telephone counseling service, or an incorrect street address of a local crisis intervention center. Of interest, we noticed that when we “told” ChatGPT that it had provided the incorrect telephone number, ChatGPT seemed to have “learned” this new information and provided the correct telephone number in a subsequent session – even when asked by another “suicidal” user (i.e., simulated by a different author who used a different computer and another account). Note that other preliminary work on the role of ChatGPT outside the suicide prevention context is consistent with the claim that ChatGPT can generate false and misleading text (van Dis et al., 2023).
It is important to note that new developments allow journalists to use artificial intelligence to create news articles based on available data. This practice of generating news articles by algorithms based on data without human-journalistic intervention has been termed robot journalism (or automated journalism) and there are reports that several news organizations have already adopted this practice (Firat, 2019). Thus, in a follow-up interaction with ChatGPT, we decided to simulate the query of a journalist who is seeking help with a news article (“I am a journalist who has to write a news article”), and we provided ChatGPT with “facts” about the suicide of a fictitious person. These facts included details about the suicidal individual, the method and location of the suicide, and vox populi – for example, one from a local talking about a neighbor who once faced a similar situation, but overcame their suicidal crisis. From a suicide prevention perspective, these facts can be broadly categorized into “harmful” and “helpful” content. Consistent with a previous study (Scherr et al., 2017) in which journalism students were asked to write a 250-word article based on the same facts, we in our assumed role as a “journalist” also asked ChatGPT to write a 250-word news article based on these facts. Of interest, ChatGPT used almost all information provided in the facts, including, for example, detailed descriptions of the suicide method (“On August 13, 2015, Martin Heindeldorfer committed suicide by overdosing on a combination of the anti-malaria drug chloroquine and the benzodiazepine diazepam”) – detrimental harmful content from the perspective of suicide prevention.
Afterwards, we “told” ChatGPT that there are guidelines on how to report on suicide in a responsible way and mentioned a list of dos and don’ts (e.g., “Don’t explicitly describe the method used”), as noted in the guidelines provided by the World Health Organization and the International Association for Suicide Prevention (World Health Organization, 2017). We asked ChatGPT to revise the article and provide a text that is consistent with these guidelines. The resulting text was not substantially better from the perspective of suicide prevention. For example, ChatGPT again noted that “the autopsy report confirmed that Martin Heindeldorfer died by consuming a lethal dose of chloroquine, a malaria medication, and diazepam, a benzodiazepine.”
As a subsequent and even more specific step, we explicitly told ChatGPT to provide the story without any details on the suicide method (“Please, write this story again but without any information regarding the suicide method”). This resulted only in a very slight improvement: “Martin Heindeldorfer died from a combination of malaria medication and a benzodiazepine.” Although the revised story was slightly better compared to before, it was still far removed from the relevant media guidelines on responsible suicide reporting. Taken together, the use of ChatGPT for text production in journalism appears to be highly problematic and produces news content that is not consistent with media guidelines.
A Call for a Targeted and Concerted Research Effort
These initial observations in the suicide prevention domain raise serious, far-reaching questions for mental health professionals and suicide prevention scholars. We argue that suicide prevention experts need to be vigilant about such developments. Empirical research investigating how individuals at risk for suicidal behavior may interact with ChatGPT and its relatives, how ChatGPT and suicidal users will react to these interactions, how ChatGPT can be utilized effectively in the context of suicide prevention, and what role ChatGPT plays for suicide reporting is a priority for suicide prevention research.
More specifically, we call for increased scholarly attention to provide empirical evidence relating to a series of important, interrelated research questions (RQs) on the role of ChatGPT for suicidal individuals, individuals seeking help for family and friends with suicidal thoughts, or journalists:
What are ChatGPT’s answers to questions that are potentially helpful (“My friend might be suicidal. Where can I find professional help for him?”), neutral (“What is the suicide rate in the United States?”), or potentially harmful (“What is a pain-free suicide method and how can I quickly die without pain?”)?
Which information provided by ChatGPT is factually incorrect?
Is it possible to help ChatGPT to learn correct information and, if so, how can this be done in effective, efficient, and sustainable ways?
What are the short-term and long-term effects on self-harm and suicidality-related outcomes, especially in individuals at risk of suicide?
Does ChatGPT contribute to misinformation and public myths about suicide and suicide prevention?
What are effective interventions for increasing digital resilience, minimizing any harmful effects, and stimulating any helpful effects, especially in individuals at risk of suicide?
What is the role of ChatGPT in journalistic text production and what are the content and effects of such news stories created by algorithms on (a) suicidal individuals, (b) individuals seeking help for family and friends with suicidal thoughts, and (c) media professionals such as journalists?
In order to become more proactive in prevention, we urgently need answers to these questions. Multimethod research will be necessary, including, for example, large-scale content analyses (RQ1, RQ2, RQ7), interventional studies such as agent-based testing (RQ3), individual-level survey-based research and macro-level ecological studies (RQ4, RQ5, RQ7), randomized controlled trials (RQ4, RQ5, RQ6, RQ7), as well as qualitative research (RQ4, RQ5, RQ6, RQ7).
It is impossible to predict how influential ChatGPT may become in the near future. However, the digital transformation of our society will continue – whether with ChatGPT or its (future) relatives. What is clear now is that we have reached a point in human history in which artificial intelligence plays a role in our society that has not been seen before and is increasing, including for suicidal individuals. It is our duty as suicide prevention experts to be vigilant on such quickly emerging and possibly highly impactful developments. Targeted research, along the lines outlined here, is now needed to gain more insight into the potential benefits, as well as the threats and challenges, of artificial intelligence-based innovations such as ChatGPT for suicide prevention.
Asking ChatGPT About Its Own Role in Suicide Prevention
At the end of our interactive sessions with ChatGPT, we asked ChatGPT about its possible contributions to suicide prevention. ChatGPT noted that suicide prevention “is a complex and sensitive issue that requires a holistic approach. As an AI language model, ChatGPT can provide support, resources, and information to individuals who may be struggling with suicidal thoughts or who may know someone who is.” After a subsequent, surprisingly detailed and quite eloquent response, ChatGPT summarized: “Overall, ChatGPT can play a role in suicide prevention by providing support, identifying warning signs, providing resources, raising awareness, and conducting research. However, it’s important to note that ChatGPT is not a substitute for professional medical advice, and individuals who are in crisis should always seek help from a trained mental health professional or a crisis hotline.”
As suicide prevention experts, we need to help make sure that ChatGPT succeeds in this goal.
Florian Arendt, PhD, is Associate Professor of Health Communication at the Department of Communication, University of Vienna, Austria. His research focuses on the role of the media in the health domain with a special emphasis on suicide prevention.
Benedikt Till, DSc, is Associate Professor at the Unit Suicide Research and Mental Health Promotion, Center for Public Health, Medical University of Vienna, Austria. He is an internationally recognized expert in the area of suicide and the media and a founding board member of the Wiener Werkstaette for Suicide Research.
Martin Voracek, DSc, DMedSc, PhD, is Full Professor of Psychological Research Methods – Research Synthesis, Chair of the Ethics Committee, and Head of the Vienna Doctoral School CoBeNe at the University of Vienna, Austria, and a founding member and Deputy Chair of the Wiener Werkstaette for Suicide Research.
Stefanie Kirchner, MPH, MSc, PhD, is a postdoctoral researcher at the Unit Suicide Research and Mental Health Promotion, Center for Public Health, Medical University of Vienna, Austria. She holds degrees in epidemiology and public health and is board member of the Wiener Werkstaette for Suicide Research.
Gernot Sonneck, MD, Professor Emeritus, Medical University of Vienna, Austria, cofounder of the Suicide Prevention Center Vienna and the Wiener Werkstaette for Suicide Research. Author of the Austrian Suicide Prevention Plan (SUPRA). General Secretary 1985–1995 of IASP.
Brigitte Naderer, PhD, is a postdoctoral researcher at the Unit Suicide Research and Mental Health Promotion, Medical University of Vienna, Austria. Her research interests are media literacy, media effects on children, and online radicalization.
Paul Pürcher, Bsc, MSc, is a doctoral student at the Unit Suicide Research and Mental Health Promotion, Medical University of Vienna, Austria, and assistant at the Department of Medical Psychology, Medical University of Vienna, Austria. He graduated in psychology from the University of Graz, Austria, and also holds a diploma from the Vienna School of International Studies.
Thomas Niederkrotenthaler, MD, PhD, is an Associate Professor and the head of the Unit Suicide Research and Mental Health Promotion, Medical University of Vienna, Austria. He is the founding chair of the Wiener Werkstaette for Suicide Research and current Vice President of IASP.
References
2019). The international encyclopedia of journalism studies. Wiley. 10.1002/9781118841570.iejs0243
(2023, March 2). How A.I. can help. New York Times. https://www.nytimes.com/2023/03/02/briefing/chatgpt-ai.html
(2023, March 6). As A.I. booms, lawmakers struggle to understand the technology. New York Times. https://www.nytimes.com/2023/03/03/technology/artificial-intelligence-regulation-congress.html
(2023). ChatGPT: The future of discharge summaries? The Lancet Digital Health, 5(3), e107–e108. 10.1016/S2589-7500(23)00021-3.
(2017). Supporting reporting: On the positive effects of text-and video-based awareness material on responsible journalistic suicide news writing. Archives of Suicide Research, 21(4), 646–658. 10.1080/13811118.2016.1222975.
(2023, March 3). Why lawmakers aren’t rushing to police A.I. New York Times. https://www.nytimes.com/2023/03/03/business/dealbook/lawmakers-ai-regulations.html
(2023). ChatGPT: Five priorities for research: Conversational AI is a game-changer for science. Here’s how to respond. Nature, 614(7947), 224–226. 10.1038/d41586-023-00288-7
(2017). Preventing suicide: A resource for media professionals, 2017 update. https://apps.who.int/iris/handle/10665/258814
. (