OpenAI pulls ChatGPT feature that showed personal chats on Google
OpenAI has removed a controversial opt-in feature that had led to some private chats appearing in Google search results, following reporting by Fast Company that found sensitive conversations were becoming publicly accessible.

Earlier this week, Fast Company revealed that private ChatGPT conversations—some involving highly sensitive topics like drug use and sexual health—were unexpectedly showing up in Google search results. The issue appeared to stem from arguably vague language in the app’s “Share” feature, which included an option that may have misled users into making their chats publicly searchable. When users clicked “Share,” they were presented with an option to tick a box labeled “Make this chat discoverable.” Beneath that, in smaller, lighter text, was a caveat explaining that the chat could then appear in search engine results.

Within hours of the backlash spreading on social media, OpenAI pulled the feature and began working to scrub exposed conversations from search results. “Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” said Dane Stuckey, OpenAI’s chief information security officer, in a post on X. “We’re also working to remove indexed content from the relevant search engines.”

“We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat…” — DANΞ (@cryps1s), July 31, 2025

Stuckey’s comments mark a reversal from the company’s stance earlier this week, when it maintained that the feature’s labeling was sufficiently clear.
Rachel Tobac, a cybersecurity analyst and CEO of SocialProof Security, commended OpenAI for its prompt response once it became clear that users were unintentionally sharing sensitive content. “We know that companies will make mistakes sometimes; they may implement a feature on a website that users don’t understand and impact their privacy or security,” she says. “It’s great to see swift and decisive action from the ChatGPT team here to shut that feature down and keep users’ privacy a top priority.”

In his post, OpenAI’s Stuckey characterized the feature as a “short-lived experiment.” But Carissa Véliz, an AI ethicist at the University of Oxford, says the implications of such experiments are troubling. “Tech companies use the general population as guinea pigs,” she says. “They do something, they try it out on the population, and see if somebody complains.”