To get this reporting to work, you need an HTTP header in place.
Historically (fairly short-term history) this was `report-uri`, hence the name of their product; however, it has since been superseded by `report-to`. More information on these headers can be found here: https://developer.mozilla.org/en-US/docs/Glossary/Reporting_... (at the time of writing, the `report-to` documentation is still pending)...
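For illustration (my sketch, not anything from the article): attaching those headers in, say, a Flask app might look roughly like this, where the framework choice, endpoint URL, and policy values are all placeholders:

```python
# Hedged sketch: one way to attach the relevant response headers.
# The endpoint URL and policy values below are illustrative only.
import json
from flask import Flask, Response

app = Flask(__name__)

# Where the browser should POST violation reports (e.g. a Report URI
# endpoint, or something you host yourself).
REPORT_ENDPOINT = "https://example.report-uri.com/r/d/csp/enforce"

@app.after_request
def add_reporting_headers(resp: Response) -> Response:
    # Older mechanism: the report-uri directive inside the CSP header.
    # Newer mechanism: the report-to directive plus a Report-To header
    # defining a named reporting group. Sending both covers browsers at
    # different stages of support.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        f"report-uri {REPORT_ENDPOINT}; "
        "report-to csp-endpoint"
    )
    resp.headers["Report-To"] = json.dumps({
        "group": "csp-endpoint",
        "max_age": 86400,
        "endpoints": [{"url": REPORT_ENDPOINT}],
    })
    return resp
```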
Steps:
- User's browser loads your page
- User's browser detects something which would be a violation of your CSP policies
- User's browser blocks that content...
- ...and also checks for a `Report-To` header, or a `report-uri` / `report-to` directive in the CSP header, among the HTTP response headers.
- If any of those exist, the user's browser makes an HTTP POST to the stated URL, passing information about the violation (a sketch of that JSON body is shown below).
- If the URL stated in those headers is the Report-URI.com service's URL (i.e. the service Troy Hunt is writing about), then their company receives this data, can use the information in it to determine that the report relates to your website (i.e. from the `document-uri` field), and stores it in the metrics they provide for your site.
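For reference, the body of a legacy `report-uri` style POST looks roughly like this; the field names follow the `csp-report` wrapper that browsers send, but all of the values here are invented:

```python
# Illustrative only: roughly the JSON body a browser might POST for a
# blocked script. Every value is made up for the sketch.
example_report = {
    "csp-report": {
        "document-uri": "https://www.example.com/checkout",
        "referrer": "",
        "violated-directive": "script-src 'self'",
        "effective-directive": "script-src",
        "original-policy": "default-src 'self'; report-uri https://example.report-uri.com/r/d/csp/enforce",
        "blocked-uri": "https://evil.example.net/payload.js",
        "disposition": "enforce",
        "status-code": 200,
    }
}
```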
PS. You could implement your own functionality to collect these reports. The Report URI product (as opposed to the header) is just a pre-written service that saves you rolling your own. You just need to build something to receive the HTTP POSTs and do something with the data. Or you look on GitHub and find someone's already done that for you: https://github.com/seek-oss/csp-server (and more: https://github.com/search?utf8=%E2%9C%93&q=csp+report-to).
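A DIY collector in that vein can be very small. Here's a rough Flask sketch (my own, not code from those repos), with the route name and log file chosen arbitrarily:

```python
# Minimal sketch of a DIY report collector: accept the POST, parse the
# JSON, and dump it somewhere you can query later.
import datetime
import json
from flask import Flask, request

app = Flask(__name__)

@app.route("/csp-report", methods=["POST"])
def collect_report():
    # Browsers send legacy reports as application/csp-report and Reporting
    # API reports as application/reports+json, so parse the body as JSON
    # regardless of the declared content type.
    payload = request.get_json(force=True, silent=True) or {}
    record = {
        "received_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "remote_addr": request.remote_addr,
        "report": payload,
    }
    # An append-only log file stands in for whatever storage/metrics you build.
    with open("csp-reports.log", "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return "", 204
```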
I don't know what's in place (if anything) to prevent sending fake reports... i.e. presumably a hacker could read these headers from a site, then send HTTP POST messages reporting all kinds of errors from various IPs (e.g. if they have access to compromised machines), flooding your useful metrics with false data, thus hiding any useful information in there...
An improvement over the current protocol would be to have your site generate an ID for every legitimate request so that any report could be tied back to it... but even then, a hacker would just need to call your site once per false report to obtain an ID, so this only offers a small amount of additional protection (and in doing so increases the probability of the site being targeted by a flooding / DoS attack).
I haven't opened the Report URI webpage, but I would assume their added value compared to DIY is exactly in spam filtering and in solving (or mitigating) all the security issues that report-uri (the header) itself introduces.
Call me cynical, but I believe reporting causes more harm than good, by exposing new attack surface.
Why not just deploy a crawler that detects CSP errors and reports them in a static report for the site owner?
CSP violations (for example, ones triggered by an XSS vulnerability) can manifest on non-public pages, where they'd be difficult to detect with a crawler.
In addition, the Web Reporting API covers more than just CSP. It can also report various types of certificate errors, such as those triggered by violations of the Expect-CT and Expect-Staple headers (which might occur if your users are under attack by a MITM).
I'm not certain what the implementation status is in various browsers, but the relevant RFCs (e.g. for HPKP) typically recommend that user agents retry the submission of reports. The report URIs themselves may also use HPKP to prevent them from being intercepted (as opposed to just DoSing the submission). There are certainly scenarios where an attacker can only temporarily MitM the victim and the reporting mechanism would still be of use eventually. The reporting itself is not the enforcing mechanism, so timely submission is not the most important thing in the event of an attack.
That said, it's true that the biggest practical use people get out of report-uri is to test the roll-out of these headers and to detect issues they might cause.
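To make the non-CSP side mentioned above concrete, an Expect-CT header that points at a reporting endpoint looks roughly like this (the endpoint URL and max-age are placeholders of mine):

```python
# Illustrative only: Expect-CT can carry its own report-uri directive, so
# certificate-transparency problems get reported the same way. Values are
# placeholders.
EXPECT_CT = 'max-age=86400, report-uri="https://example.report-uri.com/r/d/ct/reportOnly"'
# e.g. resp.headers["Expect-CT"] = EXPECT_CT
```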
Good point on Report URI having better abilities to detect false reports. It's not mentioned (at least, not on the front page; I've not delved), but their larger dataset will definitely make it much simpler to blacklist IPs suspected of sending faked reports / to spot patterns and remove false data.
I believe the benefit of having users' browsers report this rather than a crawler is for scenarios where another site attempts to display your content (e.g. in an iframe behind an overlay for a click-jacking attack). You'd never know to monitor that URL / wouldn't know from any of your metrics that the site was hosting your content; but the users' browsers would report it.
In terms of "why report it if they know to block it anyway", I believe the idea is to improve security for others; i.e. we're no longer relying on users having the latest browser to be protected; so long as one user has a browser good enough to spot the issue, we can be made aware that there's a risk out there.
To the extent that this is an issue, the server could presumably sign the document uri plus some nonce and include that signature and the nonce in the report-to uri.
A service like Report URI could trivially validate if the nonce approach were understood.
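As a rough sketch of that idea (my own interpretation, not an existing Report URI feature): the page server embeds a nonce plus an HMAC over the document URI in the reporting path, and the collector verifies it before counting the report.

```python
# Sketch of the signed-nonce idea: names and key handling are placeholders.
import hashlib
import hmac
import secrets

# Shared secret between the page server and the report collector.
SECRET_KEY = b"rotate-me-regularly"

def make_report_path(document_uri: str) -> str:
    """Page server: build a report path carrying a nonce and an HMAC over uri+nonce."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SECRET_KEY, f"{document_uri}|{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"/csp-report?nonce={nonce}&sig={sig}"

def verify_report(document_uri: str, nonce: str, sig: str) -> bool:
    """Collector: recompute the HMAC and compare in constant time before trusting the report."""
    expected = hmac.new(SECRET_KEY, f"{document_uri}|{nonce}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```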
I know and I completely disagree with that decision.
HPKP has a place. Checking CT logs (via the Expect-CT header) is bullshit, because when the stakes are high enough some CA can get hacked, or some low-level sysadmin can somehow verify domain control and register a cert, and some actor can MITM connections without anyone noticing for months.
With HPKP the fucking page doesn't load. Period. You need to either root the server or use a PDA to stop HTTPS in the first place, but then the browser bar isn't green. If the costs of getting MITMed are high it is better to risk lockout than it is to risk silent data loss.
At the very, very least I should be able to pin which CAs I trust so the threat vector doesn't include every CA in the world.
The Expect-CT header is good enough for most people, but HPKP should be supported if needed.
This always seemed backwards to me. If the worry is that certs are being misissued, why on earth are we assuming that such CAs are following the rulebook?
No, the browser should check the CAA record and refuse to trust certs issued by the wrong CA.
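For what that check would even involve, here's a rough sketch using dnspython (a library choice of mine; browsers don't actually do this, and the replies below explain why relying on plain DNS is dubious):

```python
# Sketch of "look up the CAA record and see which CAs may issue".
import dns.resolver  # pip install dnspython

def allowed_cas(domain: str) -> list[str]:
    """Return the CA identifiers that a domain's CAA records authorise to issue for it."""
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []  # no CAA record published: any CA may issue
    return [rr.value.decode() for rr in answers if rr.tag == b"issue"]
```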
This would only make sense under the assumption that the DNS response can be trusted. For the vast majority of domains and resolvers, that would not be the case. (I'm skipping the discussion of whether you'd want to trust DNSSEC at all.)
The idea you're describing has been standardized as DANE, which has failed to gain any adoption. It would not make sense for CAA to try to do the same thing. Instead, it set out to provide a defense-in-depth mechanism for certain CA vulnerabilities that fall short of a full compromise.
I hate this too, and would love to be able to use HPKP in the future.
But I’ve long accepted that Google will simply drop functionality without reason even when people still depend on it, just because they can, and that they’re deaf to all complaints.