
AI-powered chatbots are under scrutiny for the mental health risks that arise when users develop relationships with the technology or turn to it for treatment or support during acute mental health crises. As companies respond to criticism from users and experts, one of OpenAI’s newest leaders says the issue is at the forefront of her work.
In May of this year, Meta alumna Fidji Simo was named OpenAI’s CEO of Applications. Tasked with managing everything outside of research and the computing infrastructure behind the company’s AI models, she detailed the stark contrast between working at Mark Zuckerberg’s tech company and working at Sam Altman’s in a Wired interview published on Monday.
“I would say the thing that I don’t think we’ve done well at Meta is actually anticipating the risks that our products might create in society,” Simo told Wired. “At OpenAI, these risks are very real.”
Meta did not immediately respond to Fortune’s request for comment.
Simo worked at Meta for a decade, from 2011 to July 2021, when the company was still known as Facebook. For her last two and a half years there, she headed the Facebook app.
In August 2021, Simo became CEO of the grocery delivery service Instacart. She led that company for four years before joining one of the world’s most valuable startups as its CEO of Applications in August.
One of Simo’s first initiatives at OpenAI was mental health, the 40-year-old told Wired. She was also tasked with launching the company’s AI certification program, which aims to help workers build AI skills for a competitive job market and to mitigate AI-driven disruption to the workforce.
“So it’s a very big responsibility, but I feel like we have the culture and priorities to handle it up front,” Simo said.
Upon joining OpenAI, Simo said, she immediately recognized the need to address mental health just by looking at the landscape.
A growing number of people have fallen victim to what is sometimes referred to as AI psychosis. Experts are concerned that chatbots like ChatGPT can feed users’ delusions and paranoia, which in some cases has led to hospitalization, divorce, or death.
An OpenAI audit of its own software, reported in October by the peer-reviewed medical journal BMJ, revealed that hundreds of thousands of ChatGPT users show signs of psychosis, mania, or suicidal intent every week.
A recent Brown University study also found that as more people turn to ChatGPT and other large language models for mental health advice, the chatbots systematically violate mental health ethics standards set by organizations like the American Psychological Association.
Simo said she has had to take an “uncharted path” to address these mental health concerns, adding that there is inherent risk in OpenAI’s constant rollout of new features.
“Every week, new behaviors emerge with features we launch where we say, ‘Oh, here’s another safety challenge that needs to be addressed,’” Simo told Wired.
Still, Simo has overseen the company’s recent launch of parental controls for teens’ ChatGPT accounts, and OpenAI has added age prediction to protect teens. Meta has also moved to roll out parental controls for its AI products by early next year.
“However, doing the right thing every time is very difficult,” Simo said, given ChatGPT’s sheer volume of users: 800 million weekly. “So what we try to do is capture as much non-ideal behavior as possible and then continually improve our models.”
