'Unpreventable manipulation': OpenAI researcher quits over ChatGPT ads
What's the story
AI researcher Zoe Hitzig, who recently quit OpenAI, has raised concerns over the company's plans to introduce advertising into ChatGPT. She argues that the AI chatbot has built an unusually deep and personal picture of users' lives through years of interactions. Her warning goes beyond sponsored responses or banner ads; it centers on the sensitive information people have shared with ChatGPT in conversations they treated as private.
Data privacy
Hitzig's warning about potential user manipulation
Hitzig expressed her concerns, saying, "For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda." She added that people share their medical fears, relationship problems, and beliefs about God and the afterlife with the chatbot. Hitzig warned that advertising built on this data could manipulate users in ways we cannot comprehend or prevent.
Company response
OpenAI's stance on user data privacy
Despite Hitzig's concerns, OpenAI has maintained its position on user data privacy. The company previously announced plans to test advertising within ChatGPT but assured users that their conversations would not be shared with advertisers and that chat data would remain confidential. "We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers," the company said earlier this year.
Future risks
Concerns about advertising integration and its implications
Hitzig's concerns aren't just about OpenAI breaking its promise; they also extend to the longer-term consequences of building advertising into its revenue model. She argued that doing so could create strong incentives to override rules and shift priorities over time: even if current leaders intend to maintain boundaries, commercial pressures can gradually reshape decision-making, with unintended consequences for user privacy and data security.
Safeguard measures
Call for stronger structural safeguards
To mitigate these risks, Hitzig has called for stronger structural safeguards: independent oversight with real authority, or legal mechanisms requiring that user data be handled in the public interest rather than for profit. In effect, she is advocating guardrails that can't be easily dismantled when business conditions change, ensuring long-term protection of user privacy and data security.
User perspective
User attitudes toward ads in AI tools
The bigger challenge may not be OpenAI's practices but users' attitudes toward ads. After years of data controversies on social media platforms, many people appear to have made peace with advertising. Surveys show that a majority would keep using free versions of AI tools even if they were served ads, suggesting a degree of privacy fatigue: users may be uncomfortable, but not uncomfortable enough to leave these services behind.