Wouldn't shock me if OpenAI was secretly building a "motives" classifier for ChatGPT users, penalizing those who ask about too many censorship-adjacent topics. If you randomly ask for a Palestinian moon base, that's fine, but if you had historically asked for provocative pictures of celebrities, Mickey Mouse, or whatever else OpenAI deemed inappropriate, you are now sus.
Possible. I've heard people making odd claims like that, saying ChatGPT logged them out and erased everything. My guess is OpenAI wanted to limit sensationalist headlines, not that they're doing mind control.
It would harm their business, because paying customers gain nothing from being profiled like that and would move to one of the growing number of competent alternatives.
They'd be found out the moment someone did a GDPR/CCPA export of their data to see what had been recorded.
Worked for me.