"When you use Claude Code, we collect feedback, which includes usage data (such as code acceptance or rejections), associated conversation data, and user feedback submitted via the /bug command."
So I can opt out of training, but they still save the conversation? Why can't they just not use my data when I pay for the service? I'm tired of paying and then having my information harvested anyway. Tell you what: create a free tier where data harvesting is the cost of the service. If you pay, no data harvesting.
Even that is debatable. There are a lot of weasel words in their text. At most they're saying "we're not training foundation models on your data", which is not the same as "we're not training reward models on it" or "we're not evaluating our other models against it", and so on.
I guess the safest way to view this is to consider anything you send them as potentially in the next LLMs, for better or worse.
When they ask "How is Claude doing this session?", that appears to be a sneaky way for them to harvest the current conversation based on the terms-of-service clause you pointed out.
That's not just them saving it locally, somewhere like `~/.claude/conversations`? Feels weird if all conversations are uploaded to the cloud + retained forever.
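You can at least check what's sitting on your own disk. Here's a minimal sketch that lists session-looking files under `~/.claude` — note the directory name and layout are assumptions on my part (Claude Code's local storage format is undocumented and may differ between versions), so treat this as a starting point for poking around, not a definitive answer:

```python
from pathlib import Path

# Assumption: Claude Code keeps per-session data somewhere under ~/.claude;
# the exact subdirectory names and file formats are undocumented.
claude_dir = Path.home() / ".claude"

def local_session_files(root: Path) -> list[Path]:
    """Return any JSON/JSONL files under root, or [] if root doesn't exist."""
    if not root.is_dir():
        return []
    return sorted(p for p in root.rglob("*") if p.suffix in {".json", ".jsonl"})

for path in local_session_files(claude_dir):
    print(path)
```

Whatever shows up locally tells you nothing about cloud retention, of course — that's governed entirely by the terms-of-service clause quoted above.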