A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI

By News Room · November 26, 2025 · 3 Min Read

An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.

OpenAI spokesperson Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively looking for a replacement and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.

Vallone’s departure comes as OpenAI faces growing scrutiny over how its flagship product responds to users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the lawsuits claim ChatGPT contributed to mental health breakdowns or encouraged suicidal ideation.

Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and improve the chatbot’s responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company’s progress and consultations with more than 170 mental health experts.

In the report, OpenAI said hundreds of thousands of ChatGPT users may show signs of experiencing a manic or psychotic crisis every week, and that more than a million people “have conversations that include explicit indicators of potential suicidal planning or intent.” Through an update to GPT-5, OpenAI said in the report it was able to reduce undesirable responses in these conversations by 65 to 80 percent.

“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” wrote Vallone in a post on LinkedIn.

Vallone did not respond to WIRED’s request for comment.

Making ChatGPT enjoyable to chat with, but not overly flattering, is a core tension at OpenAI. The company is aggressively trying to expand ChatGPT’s user base, which now includes more than 800 million people a week, to compete with AI chatbots from Google, Anthropic, and Meta.

After OpenAI released GPT-5 in August, users pushed back, arguing that the new model was surprisingly cold. In the latest update to ChatGPT, the company said it had significantly reduced sycophancy while maintaining the chatbot’s “warmth.”

Vallone’s exit follows an August reorganization of model behavior, another group focused on ChatGPT’s responses to distressed users. Its former leader, Joanne Jang, left that role to start a new team exploring novel human–AI interaction methods, and the remaining model behavior staff were moved under post-training lead Max Schwarzer.
