
Anthropic to train AI on your chats—How to opt out
What's the story
Anthropic, the artificial intelligence (AI) company behind the Claude family of chatbots, has announced a major shift in its data policy. The firm will now use user data, such as chat transcripts and coding sessions, to train its AI models unless users opt out. The company also plans to extend its data retention period to five years for those who don't opt out.
Deadline
Users have until September 28 to make a decision
All Anthropic users must make a decision on the new policy by September 28. If they click "Accept," the company will begin using their data to train its models and retain it for up to five years. The updated policy applies to "new or resumed chats and coding sessions"; previous chats or coding sessions that haven't been resumed won't be used for training.
Scope
Updated policy applies to all consumer subscription tiers of Claude
The updated policy applies to all consumer subscription tiers of Claude, including Free, Pro, and Max, as well as Claude Code used from accounts on those plans. It does not apply to commercial tiers such as Claude Gov, Claude for Work, and Claude for Education, or to API usage through third-party platforms like Amazon Bedrock and Google Cloud's Vertex AI.
Signup process
New users will have to choose their preference during signup
New Claude users will choose their preference during the signup process, while existing users will decide via a pop-up. They can defer the decision by clicking a "Not now" button, but must make a choice by September 28. The pop-up notes updates to the Consumer Terms and Privacy Policy, effective September 28, 2025.
User control
How to opt out
Users who want to opt out can toggle the switch to "Off" when the pop-up appears. Those who accepted by mistake and want to reverse the decision can go to Settings > Privacy tab > Privacy Settings section and toggle "Off" under "Help improve Claude." Users can change this setting at any time via privacy settings, but the change applies only to future data.
Privacy assurance
Anthropic uses tools to filter sensitive data in chats
In its blog post, Anthropic assured users that it uses a combination of tools and automated processes to filter or obfuscate sensitive data, and clarified that it does not sell user data to third parties. The company frames this as part of its commitment to protecting user privacy while still using data to improve and develop its AI models.