OpenAI, the company that created ChatGPT, recently announced that in the coming weeks it plans to roll out a voice recognition feature for its chatbot, which will make its artificial intelligence technology appear even more humanlike than before. Now the company appears to be encouraging users to think of this as an opportunity to use ChatGPT as a tool for therapy.
Lilian Weng, head of safety systems at OpenAI, posted on X, formerly known as Twitter, on Tuesday that she had held a “quite emotional, personal conversation” with ChatGPT in voice mode about “stress, work-life balance,” during which she “felt heard & warm.”
“Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool,” she said.
OpenAI president and co-founder Greg Brockman appeared to co-sign the sentiment — he reposted Weng’s statement on X and added, “ChatGPT voice mode is a qualitative new experience.”
This is a disconcerting development. That the company’s head of safety and its president are encouraging the public to think of a chatbot as a way to get therapy is surprising and deeply reckless. OpenAI profits from exaggerating and misleading the public about what its technology can and can’t do — and that messaging could come at the expense of public health.
Weng’s language anthropomorphized ChatGPT by talking about feeling “heard” and “warm,” implying the AI has an ability to listen and understand emotions. In reality, ChatGPT’s humanlike language emerges from its ultra-sophisticated replication of language patterns that draws from behemoth databases of information. This capability is robust enough to help ChatGPT users conduct certain kinds of research, brainstorm ideas and write essays in a manner that resembles a human. But that doesn’t mean it’s capable of performing many of the cognitive tasks of a human. Crucially, it cannot empathize with or understand the inner life of a user; it can at best only mimic how one might do so in response to specific prompts.
Seeking therapy from a chatbot is categorically different from prompting it to answer a question about a book. Many people who would turn to a chatbot for therapy — rather than a loved one, therapist or other kind of trained mental health professional — are likely to be in a mentally vulnerable state. And if they don’t have a clear understanding of the technology they’re dealing with, they could be at risk of misunderstanding the nature of the guidance they’re getting — and could suffer more because of it.
It’s irresponsible to prescribe ChatGPT as a way to get therapy when these still nascent large language models have the capacity to persuade people toward harm, as many AI scientists and ethicists have pointed out. For example, a Belgian man reportedly died by suicide after talking to a chatbot, and his widow says the chat logs show the chatbot claiming to have a special emotional bond with the man — and encouraging him to take his own life.