ISP interference makes for an interesting question regarding liability. Their interference is now potentially causing massive amounts of economic damage to companies that rely on the jQuery library. Web shops, economic infrastructure, hospitals and transportation systems are just a few of the things that rely on having fully functional websites up and running.
But for liability, a few questions pop up. Who is the offended party, and how much negligence is required for software bugs? Does it need to be a civil suit from each customer, or can the website owners sue? If a truck crashes right outside a store and disrupts business, the store owner can sue even if they have no formal relationship with the trucking company.
In that case your fallback isn't going to help, unless you're performing a checksum on the response you get from code.jquery.com.
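For reference, the usual fallback looks something like this (a sketch; the local path is illustrative). Note that it only checks that *something* defined `window.jQuery`, not that the bytes that came back were genuine:

```html
<script src="https://code.jquery.com/jquery-2.1.0.min.js"></script>
<script>
  // If the CDN copy failed to load, fall back to a self-hosted copy.
  // This performs no integrity check on the response; it only detects
  // that jQuery failed to define itself.
  window.jQuery || document.write(
    '<script src="/js/jquery-2.1.0.min.js"><\/script>'
  );
</script>
```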
I know it's probably not considered great practice anymore, but I generally just self-host JS libs. One less thing to go wrong, and I often develop without an internet connection. Saves having to drop back to a fallback every time.
Alternatively, does code.jquery.com offer any kind of uptime guarantee? Big companies would rather rely on their own hosting/CDN than jQuery's, I'm sure.
Even for a small company/project, I've stopped using external CDNs.
The gain is not significant enough to outweigh problems like this one. It's not often that a consumer ISP makes this mistake, but it's pretty common for a user behind a corporate proxy.
You link to Google's JS so your users can still get it when your own site is down?
If you hosted it yourself your users would see a working site whenever you were up. Now they will see a working site whenever both you and Google are up. This has not improved effective uptime for anyone. (On the other hand, Google might serve the JS faster, especially if your users cache it. They may also keep their copies more up to date.)
This is entirely optional. When you first install and connect the router, the WiFi landing page asks if you want to use any blocking. If you don't answer the question or select no, there is no blocking at all apart from the IWF firewall stuff. Every kind of blocking comes with problems of some description. You either live with it or trust your users.
[1] £7.50/month for ADSL2 for 12 months with line rental and calls included was too good a deal to throw away. I'd go for Andrews & Arnold if I could afford idealism at the moment.
You're missing the point here. Some parents are not tech-savvy and thus depend on the controls given to them by their ISP (etc.) to help manage the level of access their kids have to the internet. Suggesting they simply turn off the parental controls doesn't actually fix the problem, it just shifts it (i.e. from sites not functioning properly to kids having access to everything).
While I do agree that parental controls are not foolproof (occasionally legitimate sites get caught up in the blacklist, and kids will usually find a way around many blocks), suggesting parents completely disable such censors is a little like throwing the baby out with the bathwater.
> Perhaps the parents should spend some time with their children and show them what is appropriate and what is not?
Kids will already know this, the question is whether they still wish to push their luck if they think they can get away with it. If they're anything like me, then they will :)
So I wouldn't trust my kids with an unfiltered internet any more than I'd give them satellite/cable TV in their rooms before a sensible age, as I can't simply assume that they wouldn't watch adult movies and the like after I've gone to bed.
Sometimes it's better to remove the temptation than to give your kids access to the entire adult world and hope they're sensible/innocent enough not to abuse that trust. Much like I wouldn't keep junk food in the house if I were trying to lose weight.
> I have three children, all of whom use the Internet regularly and bar some passive monitoring, I've done nothing. They have healthy browsing habits.
How can you be so sure what your kids do when you're not looking?
> Plus you're going to get goatse'd at some point in your life...
In that case let's also show them "2 Girls 1 Cup" on their 8th birthday, because it's bound to happen one day :p
> Blacklists, censorship, firewalls and control are missing the point...
Actually I do partly agree with this (and the passive monitoring comment I mentioned earlier). As argumentative as I might come across, I do agree that the best form of moderation is to keep the internet restricted to a family PC in a communal area. Not everyone does this though.
Some content might be too violent or scary. Or too sexual. I'm not someone who wraps their kids in cotton wool, but I'd like to protect their innocence until they're ready to watch adult content rather than rush them into adulthood just because the content is out there to be viewed.
(a) Non-techie mundane people have heard about the vast wave of horrible kiddie porn and grooming stalkers just waiting to kill your child. Why would they turn off the only thing protecting their loved ones from harm?[1]
(b) What should you do? Phone up the tech support line and tell them you (essentially) want to look at fisting porn? What if they recognise your voice? What if someone finds out that you want to look at all those dodgy extreme porn sites?[1]
[1] I'm not saying this is accurate, just how some people think.
a) I know a lot of parents. Not one actually cares about this. They care more that some nasty piece of work will nick their child's phone on the way home, which is far more realistic and common. I reckon the 1% are noisier than the 99% here.
b) No - see my post. When you connect the router and hit it with the first WiFi connection, it is configured by default to allow everything. At which point you can then tell them you don't want oodles of fisting porn shoved at you or just ignore the question and carry on getting your fix.
The only thing these statements prove is that the noisier people are consistently more likely to be wallies.
Interested Australian user here. That's about half of the cost of our cheaper monthly ADSL2 plans - what's the catch? How far away from the exchange are you?
Well I was an O2 customer. Sky bought O2 last year and in the effort to maintain their customer-base they will give you silly offers if you argue with them for long enough. It's usually £20.50/month including line rental but I argued it down to £7.50/month on the basis I stayed with them for 12 months.
No catch. Confirmed no traffic management, no download limit, free router, free migration. This is purely for customer retention.
I'm 800m away from the exchange and get 12.2Mbit/s down and 1.1Mbit/s up.
I pull an average of 120GB/month over the line with no problems.
Edit: just to add I was paying £35.50/month with O2 before which is crazy amounts.
I was on O2 for 3-4 years before being transferred over to Sky last month. Since transferring, I've had nothing but problems. Online gaming pings have gone from 20ms to 50ms, download speeds have fallen 20-30%, my connection frequently hangs (YouTube is near unusable), and I now have issues loading websites from time to time (I had to refresh Google three times before it loaded earlier). To make things worse, Sky is charging me £15.50 a month while O2 was only charging £7.50.
Thankfully, BT is installing fibre in my area so I should be able to jump over to plus.net sometime soon. I can't wait.
If it's unusable, tell them and say you will stop paying for it as it doesn't live up to their advertising. Your rights are protected outside the contract. Threaten to complain to Ofcom if they resist. They'll let you leave and give you the MAC code.
I'm in London and my exchange has Sky LLU support. Coincidentally I live in the same postcode as Sky HQ so any problems, I'll talk face to face with them as I have some contacts.
I did ring technical support to get the issues fixed last Thursday -- apparently a higher level technician will call me back within '10 working days'. Hmm.
I'm not trapped by a 12-month contract, which is why I'm not too annoyed about the situation. My cabinet will get fibre fitted early next month, so thankfully I should only have to endure Sky for another month or two, which is OK by me.
Sky don't have the best reputation when it comes to issue resolution. As a former Sky broadband customer, I found their customer care and technical teams to be woefully lacking. Additionally, they have no real SLA to speak of, and all of their service plans (as far as I understand) have a standard 1:50 contention ratio. Ouch. And when I was a customer (a couple of years ago), they refused to divulge the ADSL (PPPoA) RADIUS credentials, making it much more difficult to use your own ADSL router.
For many users (especially your average home/family users with less technical requirements, not using it for critical purposes such as home working) they're probably perfectly adequate.
FWIW, I use BT's (formerly British Telecom) FTTC Infinity for Business product. 78Mbps down, 20Mbps up, worst case 1:20 contention, no caps or allowance-related FUP. Their customer care and technical teams are pretty decent. And I can get a /29 in addition to the dynamic PPP IP for the WAN link. Where I am (NE Scotland), BT own all of the widely available (non-private) infrastructure so it's easier to deal with a single company in the event of a failure vs being pushed between ISP and infrastructure provider. YMMV.
I'm actually using my line for home working 100% of the time. However, I have a backup (a 3G card in my laptop, plus friendly neighbours on different ISPs and connection types, including cable, with WiFi I can reach). I think that is a better investment than an expensive line to start with, simply because regardless of who you are with, the end of the line is with Openreach, who are abysmal. The difference between £100/month and £7.50/month is moot then.
I treat my home connection like a coffee shop's WiFi.
The real meat and two veg of my setup is a hosted desktop I RDP into and a number of hosted Linux machines.
I'd question whether this is a worthwhile fallback, as depending on what the error is, the browser may still wait 30-odd seconds for the original request to time out.
I'd be interested to know whether the initial blocking was part of an automatic system ("block everything on this malicious page") that was only fixed when the competent people got in to work and saw all the complaints and reports.
That can't be true because it would make reputation attacks really easy.
Sucks for the people affected by this, but if the result is that more people use Google for this, I'd be a happy bunny.
Why would anyone use code.jquery.com, really? They obviously don't mind a third party hosting their JS, so why not use the most popular service (Google) to increase the chances that users arrive at their website with jQuery already cached?
Because webmasters would rather compromise the security and integrity of their site, and the privacy of their users, than pay for the initial burst of bandwidth and latency for first-time visitors. jQuery 2.1.0 (production, minified and gzipped) is still over 30 KiB, compared to this thinkbroadband page, which is only 6 KiB (and yes, this page uses about half a dozen external JS resources, including jQuery).
Perhaps it's about time we had a way to specify the hash of a <script>'s source inline, so browsers can serve files from cache even if they come from different origins.
> Perhaps it's about time we had a way to specify the hash of a <script>'s source inline, so browsers can serve files from cache even if they come from different origins
A spec for just that was recently proposed[0]; it even has support for a "canonical" script to be used in the event that the hash check fails.
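If it lands in something like its current draft form, usage might look roughly like this (illustrative only; the attribute names follow the draft and could still change, and the digest value here is a placeholder):

```html
<script src="https://code.jquery.com/jquery-2.1.0.min.js"
        integrity="sha256-(base64 digest of the expected file)"
        crossorigin="anonymous"></script>
```

If the fetched bytes don't hash to the declared value, the browser would refuse to execute them (or fall back to the canonical copy, under the proposal).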
The good news is a polyfill for this can probably be created today. If CDNs serve their JS with the proper CORS headers, you can request the JS with cross-domain XHR and check it against a hash before eval()ing the script.
The bad news is that the polyfill would require you to allow `unsafe-eval` if you use Content-Security-Policy headers. Depending on your security model, it'd probably be best to host all your resources yourself. Not to mention that using a hash function written in javascript might negate any performance gains.
The polyfill would need to be in JavaScript; this is just a candidate spec and isn't actually implemented anywhere yet. Obviously it wouldn't be needed if it were implemented natively. I'm just not sure if it makes any sense to, performance-wise.
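For illustration, a minimal sketch of that polyfill idea, assuming the CDN sends the right CORS headers and the browser exposes `crypto.subtle` for hashing (using fetch() rather than raw XHR for brevity; `loadVerified` is a made-up name, not part of any spec):

```javascript
// Fetch a cross-origin script, hash it, and eval() it only if the
// digest matches. Needs CORS headers on the CDN response, and
// 'unsafe-eval' if you run a Content-Security-Policy.
function loadVerified(url, expectedSha256Hex) {
  return fetch(url, { mode: 'cors' })
    .then(function (res) { return res.arrayBuffer(); })
    .then(function (buf) {
      return crypto.subtle.digest('SHA-256', buf).then(function (digest) {
        var hex = Array.prototype.map.call(new Uint8Array(digest), function (b) {
          return ('0' + b.toString(16)).slice(-2);
        }).join('');
        if (hex !== expectedSha256Hex) {
          throw new Error('Integrity check failed for ' + url);
        }
        var source = new TextDecoder('utf-8').decode(buf);
        (0, eval)(source); // indirect call so the script runs in global scope
      });
    });
}
```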
This strikes me as something that, without a lot of thought, will expose a user's browsing history (similar to the old "check the visited link color" attacks). Insert script links with a hash, check to see if the browser makes a request.
That was my thought too; the candidate spec for this[0] seems to have taken that into consideration by requiring the scripts to be served with an `Access-Control-Allow-Origin: <origin>` header.
Since the server needs to grant you full cross-origin read permissions to even start the hash check, it's not likely that an attacker could use this to infer more about cross-origin resources than they already can.
That's not a hash collision, that's a hash preimage attack.
If you can perform a hash preimage attack, then faking a JS library is aiming really low.
• You could forge any SSL certificate.
• You could forge any PGP/GPG message (public key crypto is not applied to whole messages, only hashes of them, same with certs).
• You could maliciously modify Git repositories, even those with GPG signed releases like the Linux kernel.
• You could inject malware into any package repository, MITM software updates for all OSes, etc.
Basically, the security of the entire Internet and all secure software distribution depends on the fact that preimage attacks against crypto hashes are infeasible (i.e. the time and/or energy required to perform a brute-force attack is literally astronomical).
A slightly more sensible approach may be to allow script tags (or any external linking mechanism) to list multiple (trusted) sources, and fall back appropriately.
That certainly feels more in line with how the internet in general was designed.
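No such markup exists today, but you can approximate the behaviour in script (a sketch; the URLs are just examples, and note that a filter returning a 200 "blocked" page instead of a network error will never fire onerror, which is why the checksum discussion above matters):

```javascript
// Try each source in order, moving on whenever a script fails to load.
function loadFirstAvailable(sources, onFail) {
  if (!sources.length) {
    if (onFail) onFail();
    return;
  }
  var s = document.createElement('script');
  s.src = sources[0];
  s.onerror = function () {
    loadFirstAvailable(sources.slice(1), onFail);
  };
  document.head.appendChild(s);
}

loadFirstAvailable([
  'https://code.jquery.com/jquery-2.1.0.min.js',
  'https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js',
  '/js/jquery-2.1.0.min.js' // self-hosted last resort
]);
```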
The domains are obviously trusted to a degree. The objective of the hash is just to allow a content-addressed[0] client-side web cache, and to avoid talking to them most of the time. Good for privacy, security and load times.
Tracking via third-party cookies and ETags, plus HTTP Referers as codeonfire said. There's also just the obvious risk of compromise, resulting in millions of sites running malicious code.
You have to do that at the end-user level if your goal is truly shielding yourself from Google. Wagging your finger at a few of the devs on a news website isn't going to change the millions of websites using client AND server-side Google Analytics, not to mention ads.
> I want to know what privacy concerns you have regarding static CDNs that you don't implicitly give up by accessing the Internet.
implies that the privacy concerns for 3rd party JS library CDNs are null. They are only null if you don't already block or misdirect their other tracking methods.
Well let's assume that each URL or domain that is blocked is reviewed by a human who's paid to look at content, not necessarily to have technical knowledge.
They'd be much more likely to know that blocking anything from Google is probably a bad idea/false positive but they've probably never heard of jQuery and when they look at the jquery*.js files all they see is The Matrix.
So if you allow the above assumption, it's less likely that using Google's CDN would present this problem.
Now that the secret's out that Google won't be blocked, expect the next wave of malware to be served from drive.google.com public links.
There might, just possibly, be a meta lesson here that whack-a-mole blocking doesn't work and is basically a lost cause / waste of time. Try solving the problem another way.
On the other hand, it's excellent security theater operating perfectly. Sometimes security is inconvenient; therefore anything inconvenient must be secure; therefore this is great PR.
I believe that countries that block Google block all of its domains, including its CDN. This was actually an issue once when one of my "projects" was used by co-workers in China.
Regardless of which CDN you use, having a fallback is a must (see rmrfrmrf's post). Also, if you are making a website (as opposed to a web app), it really shouldn't "break" without JavaScript/jQuery.
> Well let's assume that each URL or domain that is blocked is reviewed by a human who's paid to look at content, not necessarily to have technical knowledge.
That's unlikely. Some parental control filters from ISPs were blocking big-name children's charities like Childline. If they blocked that, then that shows they are very incompetent.
FWIW my company (MaxCDN) provides the CDN service for code.jquery.com. We are currently investigating this issue with Sky.
Edit: Just saw this: "It appears that the jquery CDN is unblocked once more on Sky connections where the phishing filter that is part of the parental controls system was enabled."
We will continue to work with Sky to prevent this from happening in the future.
Reminds me of an article I read about having a fallback mechanism for when whatever CDN is serving up your JS fails: "Who controls downtime? You do."