The Former Staffer Calling Out OpenAI’s Erotica Claims

By News Room | November 17, 2025 | 3 Min Read
When the history of AI is written, Steven Adler may just end up being its Paul Revere—or at least, one of them—when it comes to safety.

Last month Adler, who spent four years in various safety roles at OpenAI, wrote a piece for The New York Times with a rather alarming title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In it, he laid out the problems OpenAI faced when it came to allowing users to have erotic conversations with chatbots while also protecting them from any impacts those interactions could have on their mental health. “Nobody wanted to be the morality police, but we lacked ways to measure and manage erotic usage carefully,” he wrote. “We decided AI-powered erotica would have to wait.”

Adler wrote his op-ed because OpenAI CEO Sam Altman had recently announced that the company would soon allow “erotica for verified adults.” In response, Adler wrote that he had “major questions” about whether OpenAI had done enough to, in Altman’s words, “mitigate” the mental health concerns around how users interact with the company’s chatbots.

After reading Adler’s piece, I wanted to talk to him. He graciously accepted an offer to come to the WIRED offices in San Francisco, and on this episode of The Big Interview, he talks about what he learned during his four years at OpenAI, the future of AI safety, and the challenge he’s set out for the companies providing chatbots to the world.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we get going, I want to clarify two things. One, you are, unfortunately, not the same Steven Adler who played drums in Guns N’ Roses, correct?

STEVEN ADLER: Absolutely correct.

OK, that is not you. And two, you have had a very long career working in technology, and more specifically in artificial intelligence. So, before we get into all of the things, tell us a little bit about your career and your background and what you’ve worked on.

I’ve worked all across the AI industry, particularly focused on safety angles. Most recently, I worked for four years at OpenAI. I worked across, essentially, every dimension of the safety issues you can imagine: How do we make the products better for customers and rule out the risks that are already happening? And looking a bit further down the road, how will we know if AI systems are getting truly extremely dangerous?
