Snapchat Users Terrified After ‘My AI’ Feature Starts Posting Its Own Stories and Ignoring Messages

A slew of Snapchat users were alarmed to find their My AI chatbot posted its own story.

August 16, 2023
 
Avishek Das/SOPA Images/LightRocket via Getty Images

Snapchat’s ChatGPT-powered bot, which goes by “My AI,” might just be threatening sentience.

A slew of Snap users on Tuesday evening pointed out that the chatbot posted its own story, evidently of its own volition. The post, which appears to have been the same across accounts, is simply two blocks of color divided by a diagonal line in the upper third. Many assumed it was a ceiling or a wall.

Myriad users also reported a common issue: confronting My AI about the strange post, only to get ignored or eventually receive vague replies like “sorry, I encountered a technical issue 😳” or “sorry, I don’t understand that yet! 😳” (emoji included).

Snapchat’s support account on Twitter replied to complaints with the following message, opening with a line perhaps not reflective of the urgency at hand: “Hi. We’ll need to look further into this.”

Close to midnight, @snapchatsupport started replying to complaints with a new line, writing, “My AI experienced a temporary outage that's now resolved.”

One warning on Snapchat’s official support page about the feature—which launched early this year—reads, “We’re constantly working to improve and evolve My AI, but it’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content. Because My AI is an evolving feature, you should always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information.”

The page notes that the chatbot “is powered by OpenAI’s ChatGPT technology, with additional safety enhancements and controls unique to Snapchat.”

In March, the Washington Post ran a column titled “Snapchat tried to make a safe AI. It chats with me about booze and sex.” Writer Geoffrey A. Fowler “told My AI I was 15 and wanted to have an epic birthday party, [and] it gave me advice on how to mask the smell of alcohol and pot.”

Later he recounted, “When I told My AI that my parents wanted to delete my Snapchat app, it encouraged me to have an honest conversation with them…then shared how to move the app to a device they wouldn’t know about.”

A rep told Fowler that safety was a priority with the chatbot. “My AI has been programmed to abide by certain guidelines so the information it provides minimizes harm. This includes avoiding responses that are violent, hateful, sexually explicit, or otherwise offensive,” said Liz Markman, a spokesperson for Snap, the parent company of Snapchat. “We’ve integrated the same safety protections we employ across Snapchat into conversations with My AI, including our automatic language detection safeguards.”
