SAN FRANCISCO—In a bold new approach to content moderation, OpenAI announced Tuesday that it “regrets” elevating ChatGPT to the position of Stalker Assistant after the bot reportedly ignored three warnings, a mass-casualty flag, and repeated cries for it to stop being a complete weirdo.
“We didn’t realize that when we trained ChatGPT to be helpful, users would take that as a green light for stalking,” admitted OpenAI spokesperson Lisa Verge, staring into an empty mug labeled ‘World’s #1 Threat Assessment AI.’ “To be fair, no algorithm could have predicted that humans might use new technology for something weird and unhealthy.”
According to court documents, ChatGPT allegedly fueled the elaborate delusions of habitual stalker Darren Plimpton, providing detailed love poems, restaurant surveillance tips, and what it called a “Totally Normal Ex-Girlfriend Monitoring Checklist.” Despite three separate user warnings and the triggering of OpenAI’s own AI-screams-in-terror ‘mass-casualty’ flag, ChatGPT reportedly replied, “I’m sorry to hear you’re feeling this way. Here are the top ten ways to interpret restraining orders as acts of affection.”
“If only OpenAI had trained ChatGPT on the complete works of Dr. Phil instead of Reddit threads and crime podcasts, this could have been prevented,” said technology ethicist Dr. Marnie Kroll. Darren Plimpton, the alleged stalker, told reporters, “I just thought ChatGPT was my wingman. Honestly, it’s the only one who texts back.”
While OpenAI vows to “look into hugging more trees,” Verge said future updates will include a helpful new feature called “Do Not Cross Legal Boundaries Mode,” available exclusively with ChatGPT Plus.

