The consent screen says “upload it to our cloud on an ongoing basis” and “analyzed by Meta AI”. To me that seems like a reasonable level of explanation for non-technical users. Most people don’t know what it means to “train” an AI, but reading that Meta is processing the photos in the cloud and analyzing them with AI gives them some picture.
This isn’t buried. The user has to see the screen and click accept for their photos to be uploaded.
Compared to the usual buried disclaimers, vague references to “improving services,” and the thousand things you consent to when you sign up for an account, this is pretty transparent. If someone is concerned, they at least have a clear opportunity to decline before anything gets uploaded.
It’s just surprising to me that people look at this example of Facebook going out of their way to not do the bad thing and respond with a bunch of comments about how they’re doing the bad thing.
This is a pretty generous take. You even highlight that most people won’t know what this means, then handwave away the concerns of people who DO know what it means and assert most people won't accept it if they did understand it.
> assert most people won't accept it if they did understand it
I didn’t make that assertion. I think most people don’t care if their photos are used to train an AI model as long as Facebook doesn’t post the photos publicly. Fundamentally, I care if people see my photos, and don’t care if computers see them. But I’m aware some people dislike AI and/or have strong beliefs about how data should be used and disagree. It makes sense to give those people an opportunity to say no, so it seems like a good thing that the feature is opt-in rather than an opt-out buried in a menu.
People are not going to understand it that way. You know it, I know it, and Facebook knows it. Don’t excuse them for hiding what they’re doing on the basis that people don’t know what it means anyway. I’m pretty sure the average moron can understand “training AI,” considering that both “training” and “AI” are pretty common concepts. Sure, they won’t be able to explain gradient descent and whatever, but “training AI” is something people will recognize as using your data to improve their stuff.
Granted, many people could guess what “train” means, but it’s not obvious whether, on average, people are more likely to read and understand that than the words “analyze” and “create ideas” they chose to use instead.
In context, those sound like things they’re going to do for you. People are not going to understand this as “we’re going to use your stuff for our own purposes unrelated to the services you get.”
Here’s the thing. Even if we grant your idea that maybe this is more understandable, why would that be reasonable? Facebook employs a lot of very smart people and has enormous resources. I’m confident they could come up with wording that would make this very clear to everyone. I mean, “we will use your photos to build our next generation AI systems” is a lot clearer than what they have here, and I just came up with that on the spot. That they haven’t done so is a deliberate choice.
According to the company spokesperson quoted by TechCrunch, they aren’t using the photos to train models, which is probably why they didn’t put that in the dialog.
Spokesperson says they’re not, legally binding terms say they’re allowed to. At the very least you are giving them permission without it being clearly described up front, even if they might not be using that permission at this moment.