
>"It is hard to imagine a realistic threat scenario where MRI's are altered by an adversary, _before_ they are fed into a CNN."

What about when hospital staff who suspect a patient has cancer use the best machine for that patient's scans, and tend to push patients they think are fine to the older, less capable instrument? Or when they prioritise time on the best instrument for children?

What about when the MRIs done at night are done by one technician who uses a slightly different process from the technicians who created the MRI data set?

At the very least there is a significant risk of systematic error being introduced by these kinds of bias, and as you say, it's really hard to guard against this. But if a classifier I produce is used where this happens and people die... well, whatever anyone else says, I would feel responsible.
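To make the worry concrete, here is a minimal synthetic sketch (all numbers hypothetical: a made-up +0.5 brightness offset from a "newer scanner", and routing of suspected-cancer patients to that scanner). A trivial intensity-threshold "classifier" looks accurate on the confounded data while learning nothing about pathology, and falls to chance once scanner assignment is decoupled from the label:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def make_scans(labels, confounded):
    # Each "scan" is reduced to a single mean-intensity feature.
    # Assumption: the newer scanner adds a systematic +0.5 brightness offset.
    if confounded:
        # Suspected-cancer patients are routed to the newer scanner.
        newer = labels == 1
    else:
        # Scanner assignment independent of the label.
        newer = rng.random(len(labels)) < 0.5
    return rng.normal(0.0, 0.3, len(labels)) + 0.5 * newer

labels_train = rng.integers(0, 2, n)
x_train = make_scans(labels_train, confounded=True)

# "Classifier": threshold halfway between the two scanner offsets.
# It only detects which scanner was used, not any pathology.
predict = lambda x: (x > 0.25).astype(int)

acc_confounded = (predict(x_train) == labels_train).mean()

labels_test = rng.integers(0, 2, n)
x_test = make_scans(labels_test, confounded=False)
acc_clean = (predict(x_test) == labels_test).mean()

print(f"accuracy on confounded data:   {acc_confounded:.2f}")
print(f"accuracy on unconfounded data: {acc_clean:.2f}")
```

The confounded accuracy looks good and the unconfounded accuracy is near chance, even though the model never saw any tumour signal at all — which is exactly the failure mode the scenarios above would produce.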


