
> However, how do you award a bounty for addressing "risk of representational harm" due to historical biases not inherent in the model?

I think it's pretty straightforward. Most sensible, considerate humans would avoid cropping an image of a woman to her boobs simply because it's insensitive to do so. Just because the machine is trained to highlight text or other visual features doesn't preclude it from ALSO understanding human concepts which are difficult to express to the computer in a straightforward way.

There are plenty of ways the model can be improved (e.g., not preferring white faces over Black faces), and they're certainly difficult. If they weren't difficult, Twitter would have simply fixed the problem and there wouldn't be a competition. Arguably, though, if the job can reasonably be accomplished manually by a human, then a computer should be able to do a similarly effective job. Figuring out how is why it's a challenge. And if we can't figure out how, that's another interesting point in the ongoing conversation about ML.



> Just because the machine is trained to highlight text or other visual features doesn't preclude it from ALSO understanding human concepts which are difficult to express to the computer in a straightforward way.

I honestly don't know how you reached this conclusion. You're skirting the line of contradicting yourself.

> Arguably, though, if the job can reasonably be accomplished manually by a human then a computer should be able to do a similarly effective job.

I also don't see how this could possibly be true. Certainly the sphere of tasks that computers competently handle compared to humans is growing, but it's nowhere near "any job a human can reasonably do".


> I also don't see how this could possibly be true. Certainly the sphere of tasks that computers competently handle compared to humans is growing, but it's nowhere near "any job a human can reasonably do".

If we admit it's a job that a computer cannot reasonably do, then why do we have a computer doing it in the first place? We shouldn't accept the status quo with "well it's okay-enough" if it has limitations that are significant (and frequent!) enough to cause a large controversy.

In fact, Twitter's response was _to remove the algorithm from production_. The whole point of this challenge is to find out if there's a way to automate this task well. It doesn't have to be perfect, it has to be good enough that the times where it's wrong are not blatant and reproducible, like when this was initially made apparent:

https://www.theguardian.com/technology/2020/sep/21/twitter-a...


> If we admit it's a job that a computer cannot reasonably do, then why do we have a computer doing it in the first place?

That's not what you said. You said if it's a job that can be reasonably done by a human, then a computer should be similarly effective. That's clearly false.

> if it has limitations that are significant (and frequent!) enough to cause a large controversy.

Controversies are not necessarily a sign that a problem is significant. Consider all the "war on Christmas" nonsense.


Cool, then we shouldn't have the computer automatically cropping photos, given that it can't reasonably do it.


Training an ML model to understand how "most sensible, considerate humans" would act is anything but straightforward. I'm not even sure most people would consider cropping an athlete at the jersey number, regardless of race or gender; it just doesn't make sense, yet the machine seemed to do it at the same rate for male and female subjects. Retrofitting discrimination onto this result only once you learn the label of the data isn't particularly useful. We want to know how to make good, non-harmful predictions in the future.


> Training a ML model to understand how "most sensible, considerate humans" would act is anything but straightforward.

This is exactly why it's a challenge! The point is that the goal should be to do it the way a human would find satisfactory and not the way that's easy.

Even if the machine wasn't trained to be biased, the machine should still produce results which people see as good. We didn't invent machines to do jobs just so people could say "but that's a bad result" and reply with "yes but the machine wouldn't know that". We should strive for better.


> Most sensible, considerate humans would avoid cropping an image of a woman to her boobs simply because it's insensitive to do so

The sexualization of the human form, even nudity, is not a human universal, not even a Western universal (consider the Nordics and Germany). Even if only by omission, this move still overemphasizes the sexuality of breasts, which amounts to pushing American cultural sensitivities onto the world.


Yes. Anecdote: An Eastern European friend went to America. Her parents brought her little sister, something like age five, to the pool. The child wore no top. Never occurred to them. Americans were aghast and demanded that she wear some kind of little bikini thing to cover up. The Europeans were confused: "But that just sexualizes her! It suggests she has breasts, which she does not!" Not that they fought it. When in Rome.


You're very much missing the point. The machine obviously isn't intentionally sexualizing anyone, but it's producing a bad result; not only is it bad, it can be perceived as sexualization (regardless of whether there's bias or not). The machine lacks understanding, and for some people the resulting bad output is extra bad.

Let's say I started a service and designed a machine to produce nice textile patterns for my customers based on their perceived preferences. If the machine started producing some ugly textiles with patterns that could be perceived as swastikas, the answer is not to say "well there are many cultures where the swastika is not a harmful symbol and we never trained the machine on Nazi data". The answer is to look at why the machine went in that direction in the first place and change it to not make ugly patterns, and maybe teach it "there are some people who don't like swastikas, maybe avoid making those". It's a machine built to serve humans, and if it's not serving the humans in a way that the humans say is good, it should be changed. There's no business loss to having a no-swastika policy, just as there's no business loss in saying "don't zoom in on boobs for photos where the boobs aren't the point of the photo".

This problem has _nothing_ to do with sensitivities, it's about teaching the machine to crop images in an intelligent way. Even if you weren't offended by the result of a machine cropping an image in a sexualized way, most folks would agree that cropping the image to the text on a jersey is not the right output of that model. Being offensive to women with American sensibilities (a huge portion of Twitter's users, I might add[0]) is a side effect of the machine doing a crappy job in the first place.

[0] https://www.statista.com/statistics/242606/number-of-active-...


“Badness” is not a property of the object; it is created by the perceiving subject. What the AI does is attempt to scale the prevention of a particular notion of “badness”, one that suits its masters. In other words, Twitter is just pushing another value judgement onto the entire world.

Even the value of “no one should get offended” is subjective, and in my opinion makes for a dull, stupid world. Ultimately it is a cultural power play, which is what it is; just don’t try to dress it up in ethics.


Badness is indeed a property of the output of this algorithm. A good image crop frames the subject of the photo being cropped to fit nicely in the provided space. A bad image crop zooms in on boobs for no obvious reason, or always prefers showing white faces to Black faces.

You're attempting to suggest that the quality of an image crop cannot be objectively measured. If the cropping algorithm changes the focus or purpose of a photo entirely, it has objectively failed to do its job. It's as simple as that: the algorithm needs to fit a photo in a rectangle, and in doing so its work cannot be perceived as changing the purpose of the photo. Changing a photo from "picture of woman on sports field" to "boobs" is an obvious failure. Changing a photo from "two politicians" to "one white politician" is an obvious failure. The existence of gray area doesn't mean there is not a "correct" or "incorrect".
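To make the "objectively failed" claim concrete: the usual automated approach is saliency cropping, where the crop window is chosen to maximize some per-pixel importance score. Below is a minimal sketch of that idea (the `saliency_crop` helper and the brute-force search are my illustration, not Twitter's actual code; their production system derived the saliency map from a trained model). The failures discussed in this thread happen when the saliency map itself scores the wrong things, so the "best" window lands on boobs or jersey text instead of the subject.

```python
import numpy as np

def saliency_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return the top-left corner of the crop window with the highest
    summed saliency. `saliency` is an (H, W) non-negative importance map;
    any model or heuristic can supply it."""
    H, W = saliency.shape
    # Integral image: lets us score every candidate window in O(1) each.
    ii = saliency.cumsum(axis=0).cumsum(axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    best, best_xy = -1.0, (0, 0)
    for y in range(H - crop_h + 1):
        for x in range(W - crop_w + 1):
            s = (ii[y + crop_h, x + crop_w] - ii[y, x + crop_w]
                 - ii[y + crop_h, x] + ii[y, x])
            if s > best:
                best, best_xy = s, (y, x)
    return best_xy

# Toy map with all saliency in the lower-right quadrant:
sal = np.zeros((8, 8))
sal[4:, 4:] = 1.0
print(saliency_crop(sal, 4, 4))  # (4, 4): the crop locks onto the salient region
```

The mechanics here are trivially "correct": the window with the most saliency always wins. Whether the result is a good crop depends entirely on whether the saliency map reflects what a human would call the subject of the photo, which is exactly the part Twitter's model got wrong.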

> Even the value of “no one should get offended” is subjective, and in my opinion makes a dull, stupid world.

You'd agree with the statement "I don't care if my code does something that is by definition racist"?


> If the cropping algorithm changes the focus or purpose of a photo entirely, it has objectively failed to do its job.

You just changed the problem formulation to require an objective definition of “purpose” and a tolerable delta of deviation. That’s just kicking the can down the road.



