
This might also be useful to counter the complaint sometimes raised that Google only runs Project Zero to make its competitors look bad.

While they found some exploitable bugs on the iPhone here, they also found (and reported) that quite a few systems did not seem currently exploitable. It seems clear this is not a hit job, at least to me.



Sorry, but it will take more than a blog post for me to believe it.

When P0 recruits two prominent Chrome hackers (Ned Williamson & Samuel Gross) and makes them work directly on iOS, it looks like a hit team to me.


If Google's idea of a hit squad is to pay a few hundred thousand dollars' worth of salary to have people do your bug searching for you, and to give you a few months to fix the bugs before letting anyone know (or until after you've fixed them, if sooner), then sign up any company I'm working for, and I'm sure there are plenty of other companies that would love the attention too.

Apple gets attention from them because a lot of people have Apple hardware and software. That means problems there have a higher impact than on some less-used product.

Apple is more than welcome to respond with its own researchers finding Android and other software bugs, and we'd all be better off if they did.


I don't think Apple has a problem with this unless it's irresponsible disclosure. They get a service from P0, and they're quite fast at fixing the issues, which works to their advantage. When an exploit is published it will always say "Apple already fixed the issue," which works just great for their image with customers.


"Responsible disclosure" is an Orwellian term invented by vendors to coerce security researchers, who have no duty to vendors and are virtually never compensated for their work, to conform to the vendor's own schedule and commercial preferences.

The better term is "coordinated" (or "uncoordinated") disclosure.


Those disclosures do not always impact only the vendors, but can also impact end users, so I don't think this is only about being nice to the vendors. (I'm aware that vulnerabilities can be exploited even if they are not disclosed by security researchers, and that disclosing can in some cases benefit the end user, but I'd guess disclosure makes exploitation much cheaper, since you don't have to hire a team of highly qualified professionals to find the vulnerabilities.)


The point made is that "Responsible" implies an ethical obligation to tell a vendor before telling the public.

It's not clear at all that this is reasonable, and IMO, this seems to highlight the lack of incentive (or even perverse incentive) for vendors to secure their systems.


Semantics aside, I hope the point I was making is clear. By "irresponsible" I meant the regular definition of the word: reckless, or careless; not following any best practices.

Out of personal curiosity, how is "coordinated" mitigating the issue you mentioned? It eliminates the vagueness of "responsible" but seems a lot more strict for the researchers:

> The primary tenet of coordinated disclosure is that nobody should be informed about a vulnerability until the software vendor gives their permission


Notifying people that their devices are vulnerable to something is never reckless or careless or irresponsible.

Those words all focus on the fact that 'bad guys' also get notified and ignore that users get notified.


Ok, and I agree that "responsible" is not the right word, but let's also acknowledge the asymmetry that often exists between the users and the bad guys.

Telling users that their device or software is vulnerable is not really useful to them unless it comes with a patch. Very few users have the knowledge, skill, or access to directly alter their technology to secure it, based on a vulnerability report. Most don't even know how to be aware of such reports, or even that they should be.

"Cyber bad guys" are more likely to be aware of and able to act upon a vulnerability report, as they have more knowledge than most users, and their entire mode of operation is that they don't wait for access or permission.

This is why "coordinated" disclosure is sometimes better. By announcing the vulnerability with the patch, the asymmetry between users and bad guys is better balanced. Users can be reasonably expected to install patches when notified to do so.

Of course there are all sorts of exceptions, such as when data is actively being leaked or exfiltrated, which users could delete or remove. Or when the security vulnerability affects systems managed by people sophisticated enough to take direct mitigating action, like changing server configuration or cycling keys.


Personally, I think part of what it comes down to is respect for the people in question. "We didn't tell you because statistically you're not likely to do the right thing, if anything" shows a lack of respect for people and their ability to determine their perceived best action and follow through with it.

If people really can't be bothered to keep up with what's going on, then they'll offload that responsibility to someone else if it's important enough. We already do that with IT for companies, and with anti-virus for a lot of people at home (as much as I think most of those companies focus on the wrong thing)[1]. Adding information to the system allows that market to be more efficient and useful.

1: I would love to live in a world where most of the protection was a combination of the OS and application vendors patching their own software, and protection-consultant/anti-virus companies that knew what software you ran giving you good information on what you should and should not do, on a regular basis or for short periods until something is fixed, etc. I think that's a much more valuable service than "we scan all your incoming and outgoing mail and make your computer so slow and unresponsive you think you need to buy a new one".


"Stop using your phone for sensitive communications" is hugely useful advice.

You're making the assumption that the disclosure in question is the first discovery of the vulnerability.


I think there's a balance to strike here. The fact that security researchers like the Project Zero people don't just publish exploit details and sample code on day 1 suggests that it might not actually be in the best interest of users to do so. It may actually cause far more damage to users than giving the manufacturers the standard time to patch. As a matter of fact, every such disclosure made "irresponsibly/uncoordinated" on Twitter was universally condemned by security researchers and software providers alike.

Doing something "responsibly" isn't just the prerogative or duty of security researchers. It's a general term which means "putting thought into it".


I think you are wrong about "universally". It was probably widely condemned, but I doubt it was universal, unless you are drawing convenient definitions where anyone who thought it was fine wouldn't count as a security researcher.


Sometimes I do wonder if companies would take security more seriously if the convention was to disclose publicly without delay. My guess is they would instead just pay more for breach insurance, or try harder to shoot the messenger.

EDIT: breach, not beach


Apple should be performing this kind of research themselves on such a high-profile target as iOS. If you let a competitor discover stuff like this, then you kind of deserve to get "hit".


Only if the guys working on other systems are less talented. If equally talented guys work on other systems, then it's not a hit job, and I'm sure Google puts its best guys on finding holes in Google products.


As mentioned in the article, they looked at Android messaging earlier this year: https://googleprojectzero.blogspot.com/2019/03/android-messa...



