Without PNaCl, this feels nothing but regressive, for at least three reasons:
1) JS is awesome precisely because of the common denominator. But because Chrome said so, devs should go back to triple-testing everything and dusting off their assembly debugger toolchains?
2) With more and more happening on cloud servers, JS engines being mature, and clients just getting faster, the need for that ultimate level of performance feels ever more irrelevant.
3) ActiveX long ago taught us several reasons why auto-downloading and running native code is a bad idea. And even if Google manages to keep their implementation airtight and standards-conforming, what happens when NaCl takes off and Microsoft takes a crack at it?
For gaming this is a big thing. One thing that limits cool 3D games on the Web is the lack of good engine technology that can be used without plugins. Unity 3D, in cooperation with Google, has shown that it can run without a plugin through NaCl.
I'm really looking forward to this possibility; it makes the Chrome Web Store even more interesting as a sales platform in the long run.
1) Only if speed matters. There are a lot of cases where it doesn't. You can now do this when it does. Also, we are talking C/C++ (but really anything that can be compiled to binary code), which is not even close to assembly.
2) For the problems we have today you may be right, but you forget the class of problems that couldn't be done the old way. We can now do them.
3) People will learn not to use MS products? But in all seriousness, that will be MS's problem.
Anyway, it is open source; I doubt Google has chosen a license that MS would object to.
You don't _have_ to use NaCl just because it is available. If you find performance is an order of magnitude away from where you need it to be, NaCl could be one technology to consider. Demanding games come to mind, harvesting spare computing power is another possibility, and I'm personally interested in high-performance programmable audio (think Max/MSP/Jitter within the browser), etc.
> 1) JS is awesome precisely because of the common denominator. But because Chrome said so, devs should go back to triple-testing everything and dusting off their assembly debugger toolchains?
If you absolutely need the flexibility or speed that NaCl offers, then by definition you can't get by with just V8 and HTML5. If you can get by with V8+HTML5, and NaCl offers a speed advantage, a JIT compiler will allow you to have the common denominator as well as the improved speed. If you can get by with V8+HTML5 and NaCl isn't a speed advantage, well....
> 2) With more and more happening on cloud servers, JS engines being mature, and clients just getting faster, the need for that ultimate level of performance feels ever more irrelevant.
But it's always nice to have more speed anywhere. If NaCl is faster than V8, a JIT compiler will allow you -- again -- to have your common denominator and a higher level of performance.
> 3) ActiveX long ago taught us several reasons why auto-downloading and running native code is a bad idea. And even if Google manages to keep their implementation airtight and standards-conforming, what happens when NaCl takes off and Microsoft takes a crack at it?
Let's assume that implementations can be made secure with enough time and money, two things this project is endowed with. I presume you're implying that Microsoft would mess up the implementation, making it a nightmare for devs everywhere. With browser shares being as egalitarian as they are, a JIT compiler would put the onus of a correct implementation on the vendor.
> If you absolutely need the flexibility or speed that NaCl offers, then by definition you can't get by with just V8 and HTML5.
Yes, and in that case perhaps you should not be creating a web app anymore. The web is not the only platform for software development, nor should it be, in my opinion.
I find it a bit amusing, albeit rather sad, that you got downvoted for saying that. You're exactly right, of course... the web (that is, html/css/js over http) is NOT the only way to distribute apps, and it isn't appropriate for everything.
Unfortunately, this war has been lost, in the minds of developers in general. Nearly everyone seems to think that the web-browser is a modern day X server, and that it's The One True Way to remote out UIs or to distribute apps.
I do wish people would look beyond this and consider some other options and invest some time and energy on some of the other choices, but it just ain't happening for the most part.
You really want every web page to be shipping their own JIT (or forcing users to grab it from a CDN)? JavaScript JITs are huge.
Also, the idea that you can just ship a JIT in Native Client and have the performance not hugely regress is unproven. Modern JavaScript JITs are fundamentally based around self-modifying code (look up how PICs work). NaCl penalizes self-modifying code by requiring that it run through a verifier. It would require a lot of engineering effort just to avoid huge JS regressions (like this 2.5x slowdown from a year ago [1]); why not spend that manpower to improve JS engines?
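To make the self-modifying-code point concrete, here is a toy monomorphic inline cache in C, the idea underlying PICs. Everything in it (the names, the struct, the lookup) is illustrative rather than taken from any real engine:

    typedef void (*method_fn)(void *obj);

    static void generic_method(void *obj) { (void)obj; }  /* stand-in method */

    /* Stand-in for the VM's slow-path method-table lookup. */
    static method_fn lookup_method(int type_id) {
        (void)type_id;
        return generic_method;
    }

    typedef struct {
        int type_id;       /* last receiver type seen at this call site */
        method_fn method;  /* cached method for that type               */
    } InlineCache;

    void call_method(void *obj, int type_id, InlineCache *ic) {
        if (ic->type_id == type_id) {
            ic->method(obj);                /* fast path: cache hit */
        } else {
            method_fn m = lookup_method(type_id);
            ic->type_id = type_id;          /* a real JIT patches the   */
            ic->method = m;                 /* call site's machine code */
            m(obj);
        }
    }

In a real engine, the check and the cached target live in generated machine code, so updating the cache is literally a write into executable memory; under NaCl, every such patch would have to be re-validated by the verifier.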
> the need for that ultimate level of performance feels ever more irrelevant
Nah, I don't know. As a user I'm pretty much pissed off when my multicore machine is hitting 100% on 3 of its cores when browsing some blog.
> ActiveX long ago taught us several reasons why auto-downloading and running native code is a bad idea.
It's not much different from downloading and running JS. Heap-spraying attacks are the simplest you can run against a JS VM. Then there are more advanced attacks that abuse the JITs nowadays found in VMs to generate malicious native code. And I bet not many JS VMs check their generated code for sanity, which NaCl, on the other hand, does.
> Nah, I don't know. As a user I'm pretty much pissed off when my multicore machine is hitting 100% on 3 of its cores when browsing some blog.
That's due to shitty coding by the front-end developer of the blog. With power comes responsibility; writing in C is a lot of power but there are far more opportunities to write shitty code.
You don't want JavaScript VMs to be running NaCl's verifier on all the code that they generate. That would be a significant performance loss. Remember that JavaScript engines are fundamentally built around self-modifying code, so the verifier would be running all the time.
This, along with the Chrome Web Store, marks the first major deviation from Chrome the HTML5 browser toward Chrome the platform. It's exciting but also unfortunate that we're starting to see a divergence here, when other browsers are unlikely to pick up NaCl for some time (if at all) and the Chrome Web Store is entirely incompatible.
ChromeOS should be added to your list too. When Chromebooks get VPN... they will be seriously tempting for a lot of businesses.
Also: this is a dumb thought, perhaps, but I was just thinking about why browser sandboxing is so important.
Obviously (on a non-Chromebook at least) people can download and run any app locally, with the download itself often occurring through a web browser.
So why is sandboxing important?
I guess the issue is that you are downloading and running a full app with each page request, and often this download/execute cycle can occur on a script tag that is never explicitly approved by the user, so the frequency and invisibility of executing foreign code is much greater.
This is doubly true if you're talking about a Chromebook, whose administration and security features rest upon the fact that someone can't install stuff locally.
Also, the model of "trust every NaCl executable before you run it" breaks down in this environment...because Google depends on the fact that people must be able to trust clicking links they've never seen before [i.e. that they just searched for].
So it's really extremely important to their bottom line that they nail any security issues with NaCl. This may be different from MS's perception of ActiveX security as a priority.
Sandboxing is important on normal desktop browsers because downloading native apps off the internet and running them is incredibly unsafe for normal users. The more that can run in the browser's sandbox, the better for users.
As for comparisons to ActiveX, they are completely different. ActiveX was not designed to place any restrictions on the downloaded code at all. In contrast, NaCl is designed to completely restrict the downloaded code.
Yes, there can be implementation bugs, just as there are implementation bugs in browsers. But it's a really important difference that full local privileges in NaCl is a _bug_ that will be fixed, whereas full local privileges in ActiveX is a _feature_ that will not be fixed.
I think they might also see NaCl as helping drive Chrome's growth, when suddenly a browser can do cool stuff that other browsers can't.
EDIT: I'm old enough to remember the browser wars, but I really like most of the trends that Google/Chrome have introduced, from auto-updates to revving the JS engine to fallback-capable SPDY.
ActiveX's downfall was a lack of security and relatively high complexity (the two are often related). Google, so far, has had quite a good run in those areas.
No, ActiveX's downfall was that it wasn't part of the open web. If ActiveX had had bullet-proof security, it still would have failed, because you'd still only have IE supporting it. In NaCl's case, Chrome will be the only browser to ever support it, which means it is just a nicer flavor of ActiveX, and likewise, I hope it falls flat on its face.
Native Client is completely open: the executable format is open and the source code is open. Right now Native Client is in its early stages, so it's premature to consider Native Client for standardization.
If it lives up to its promise, and Chrome is the only browser to support it, then it says more about the other browser vendors than anything.
Just because something is a standard in no way means browsers should actually support it. For example... C# is a non-proprietary spec approved by ECMA, but you don't see it inside of a web browser.
If something is standardized and approved by the W3C or WHATWG, and no other browsers support it, then yeah, come back and we'll chat. But that's not going to happen anytime soon. Until that happens, it shouldn't be inside of a web browser and promoted as web technology.
WHATWG is a sock puppet for the browser vendors. So saying 'the browser vendors ought to support something only if WHATWG approves it' would effectively be to say that 'the browser vendors should support something only if the browser vendors collectively want to support it'. That's obviously an argument one could make, but it would be better if one started by dropping the fiction that WHATWG is some sort of neutral party or independent source of legitimacy.
In fairness, the argument you actually made is 'the browser vendors ought to support something only if WHATWG or W3C approves it', which expands to 'the browser vendors should support something only if the browser vendors collectively want to support it, or if W3C approves it'. But that leads to obvious questions. Why is W3C approval critical for technologies of which the browser vendors collectively disapprove, but of distinctly secondary importance for the technologies they collectively like? And when was the last time the browser vendors showed much sign of meekly accepting W3C-approved technologies of which they disapprove? It seems very much as if the WHATWG HTML coup was not only about adopting features which the browser vendors wanted but which W3C was slow to standardise, but also about stymieing features which the W3C approved of but which the browser vendors don't like (like RDFa, namespaces, and HTML modularisation). So setting the W3C up as an alternative gatekeeper here seems almost as cheeky as advancing the WHATWG.
The WHATWG strikes me as exactly what a standards org is supposed to be: enough vendors to form an industry consensus get together and agree to cooperate on certain things for their collective benefit. Or is that the point you were making?
WHATWG is certainly efficient at advancing the collective interests of the browser vendors. The problem arises when those interests conflict with the interests of the rest of humanity.
Yes. For example, it may say that other browser vendors care about things other than x86/x86-64 and ARM (which is all Chrome cares about; it doesn't run on any other architectures).
Of course if you _want_ use of the Web to be tied to particular hardware architectures, then Chrome's push with NaCl is ok. But some people and organizations consider such ties to be a really bad idea.
ActiveX was a threat to the open web not because it used native code but because it depended on the Windows API, which was enormous and deeply coupled to the architecture of Windows. For this reason, implementing ActiveX on other platforms was not even an option.
It's the same story with every technology Microsoft tried to embrace, extend, and extinguish. It was Windows and its API that gave them the power, not any programming language or compile target.
In light of that, NaCl is not at all the same beast as ActiveX. It doesn't matter that nobody else is implementing it now, as long as they can implement it, should it one day become strategically important.
If Google has a secret weapon equivalent to the Windows API, it's their suite of cloud services. When you catch them using their browser to lever people onto Gmail and G+, that's when it's time to cry foul.
How is depending on the x86 and ARM hardware architectures significantly better than depending on the WinAPI? It still locks out people who can currently use the Web just fine from using content, simply because of the computer hardware (or OS, in the case of WinAPI) they've chosen to use. This is still a bad thing, just like 10 years ago.
The difference is that NaCl applications can be trivially recompiled for a new architecture. There is a water-tight abstraction layer there. Windows programs, in general, can't be trivially ported to another OS because they are tightly coupled to Windows all the way up the technology stack.
That said, I wouldn't trust web developers to promptly recompile their apps for new architectures, so the LLVM approach is greatly preferred.
NaCl is not going to have the same ubiquity as the web proper. It's a way to deliver applications over the web, not web sites.
> The difference is that NaCl applications can be trivially recompiled for a new architecture.
Except they can't, in many cases. Recompiling for a new architecture is rarely trivial.
More importantly, there would be no impetus to recompile for a new arch with low market share, so its users would be just as locked out of NaCl stuff as if it _were_ trivial to recompile.
> so the LLVM approach is greatly preferred.
Except LLVM isn't architecture-agnostic, in general... just doing LLVM lets you abstract away some aspects of architectures, but by no means all.
> It's a way to deliver applications over the web
Sure. But it would sure suck if you had to get Google to recompile gmail for your new architecture in order for your users to have access to it, as opposed to just writing a JS interpreter or JIT for your architecture!
And gmail and its ilk are _exactly_ what NaCl is trying to target.
"Chrome will be the only browser to ever support it"
Doesn't have to be that way. It's only supported in Chrome because Mozilla is refusing to implement it for no other reason than... I really don't know what their reasons are. There is even a plugin for Firefox. Google has accepted a ton of Mozilla inventions, even killing some of its own initiatives like O3D in favor of WebGL, and made plenty of other such bridging moves. I'm not seeing the love from the other side, but I might be reading the situation wrong. I have no internal info as to how the debate went over NaCl.
Also, NaCl is being used by Google for some scientific computing stuff, where they take scientific code written by external scientists and run it in Google datacenters. I'm starting to think this could be kickass for cloud apps.
Please explain to me why it should die.
Now, to be fair, Mozilla is potentially going to have something similar in WebCL. It's not exactly the same thing, but it's close.
> ...Mozilla is refusing to implement it for no other reason than... I really don't know what their reasons are.
Probably because NaCl blasts way beyond a leaky abstraction into hard-coded dependency on a specific processor. Mozilla (I'm guessing) wants the web to be a leak-proof abstraction. There are half a dozen different architectures on which one may want to run Firefox (x86/x86-64, PPC, ARM, Sparc, MIPS, etc.). NaCl requires in-depth security analysis and implementation for each platform, plus every web developer would have to compile and test a version of their code for every platform (or, more likely, they'll just support x86, maybe ARM).
PNaCl using LLVM would be slightly better, but WebCL is probably the best approach to number crunching in web apps. Security is also easier at a higher level of abstraction.
Surely you mean x86. Firefox Mobile for Android (ARM) has been out for months. And x86_64 builds are already in existence and are slated for Firefox 8.
Firefox has x86, x86-64, and ARM as tier-1 architectures.
It has a whole bunch of tier-2 architectures that are supported but to a lesser extent (e.g. a patch that breaks one of them doesn't automatically get backed out immediately). People are shipping and using Firefox on various of those architectures, including JS jits on at least Sparc and PPC.
XMLHttpRequest was pioneered by Microsoft Internet Explorer 5 in 1999. (Notably, it was in the form of an ActiveX control.) Mozilla first shipped an emulation in December 2000.
Popular ajax web applications were shipped as early as 2000 (Outlook Web Access) and 2002 (Oddpost). The term "ajax" was coined in Feb 2005, around the time of Flickr (2004), Gmail (2004), and Google Maps (2005). But of course the coining of term was a recognition of something that was already happening, not the spark.
The W3C standardized XHR in April 2006, but I think WHATWG may have spec'd it earlier. I can't find the history.
The same thing happened with the DOM, though much quicker. Something you'd recognize as the modern DOM was first shipped by IE 4, in 1997. The first bits of it were standardized by DOM level 1 in late 1998, with most of the rest coming from DOM level 2 in late 2000. The first Netscape implementation was shipped in 2000 (Netscape 6.0).
My point here is that many successful web technologies began just as NaCl is beginning. Technologies have frequently become part of "the open web" through this exact process.
I believe NaCl is radically different in nature from the DOM or XHR additions; regardless, web standards development is a much different game today than it was 5, let alone 10, years ago. We aren't talking about Apple experimenting with some "-webkit-" CSS additions; this is something that will essentially use HTTP to deliver C/C++ application code to computers. It's a big deal if it is to become part of the Open Web, and Native Client has received zero interest thus far from other browser makers. Because of that, it will likely never be a WHATWG-approved spec, and this is no secret to Google.
The point of my post that you originally commented to was that web technology needs to be supported, cross-browser, to gain a foothold. NaCl will not see that, yet Google is actively promoting it as part of the web (see announcement blog post).
It is a bit unfortunate to see divergence, if only because it would mean reverting the platform-independent development that browsers currently allow.
At the same time, though, sacrificing progress for standardization's sake is most unwanted. It is inevitable that some companies will make some superior innovation, and other companies might not want to adopt it right away, or perhaps they'd like to build something even better. But that's okay.
This kind of leapfrogging might be hard for developers, because they will have to maintain different branches of their product. But it is a net win for users: the ones on the "superior" platforms get access to the best technology around, which places pressure on the competition to adopt the new superior technology or move ahead with something better of their own.
Is there really a divergence though? Or is this just another way to run a program?
Relatedly, I think others in this thread are forgetting why exactly ActiveX was not a good thing.
To me, this is more analogous to XUL: an open-specced, open-source way to do something new in a browser that no other browser vendor ever implements. XUL didn't hurt the web (though it certainly still gives Firefox addons a competitive advantage), and -- at least as long as native client isn't adopted by other vendors -- I don't really see this hurting the web either.
The danger here is that we'll see Google lessen its focus on the speed and efficiency of V8 and instead start falling back on NaCl in their own apps. We seem to be at a particularly dangerous juncture right now where all the major desktop OSes are running on x86 processors. Thus a scheme like NaCl can easily take off and then, once we are dependent on it, it can lock us in to specific processors and platforms instead of the dream of the web - running anything anywhere.
Unlikely. First, thanks to mobile, x86 is far less dominant than it was even a few years ago.
Second, Google can't afford to deliver a second-class experience to users not using Chrome. They don't have the kind of market share it would take to get away with that.
I don't know the particulars of how this is implemented, but you would either have to compile your code for ARM (adding an enormous burden to web development), or have all mobile browsers ship an x86 emulator (which is a massive step backwards).
Google has in fact announced that they will more or less stop V8 optimization work... which is too bad, because V8 is nowhere close to as fast as JS engines will get.
How would precompiled JavaScript help? NaCl provides a subset of the processor's capabilities. V8 has access to everything, and is already compiling JavaScript to native code. The only way you could exceed V8's performance is by writing a better JavaScript compiler than V8 -- no small task. Anything you can do with JavaScript-[as-yet-unwritten compiler]-NaCl, Google can do with JavaScript-V8-native code.
Could someone more familiar with the effort comment on whether Google realistically solved the security and trust problems inherent in the approach? I'm assuming that
while(1) fork();
is somehow blocked, but native code is a bonanza of attack vectors.
The Native Client paper[1] was published in 2009. The main reason it took so long to be officially released is that it underwent a lot of scrutiny by various security researchers[2], through the Google bounty program. In fact, NaCl has been shipping with Chrome all along; it just needed a flag to enable it (--enable-nacl).
You should read their research paper for interesting insights.
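For a flavor of the approach, going from my memory of the paper (so treat this as a sketch): the verifier statically disassembles the binary and only permits control flow to land on 32-byte-aligned "bundles", with every indirect jump forced through a masking sequence. Conceptually:

    #include <stdint.h>

    typedef void (*code_ptr)(void);

    /* Toy model of NaCl's control-flow rule: mask every indirect jump
       target onto a 32-byte bundle boundary so the verifier can
       statically enumerate all reachable instruction starts. In real
       NaCl this is a mandatory instruction sequence inside the
       untrusted machine code itself, not a C helper like this. */
    void sandboxed_indirect_jump(uintptr_t target) {
        target &= ~(uintptr_t)31;    /* clear low 5 bits: bundle-align */
        ((code_ptr)target)();
    }

Something like while(1) fork(); is handled a level up: raw syscall instructions don't pass the verifier at all, so the module can only reach the OS through the trusted runtime's interface.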
This is a huge win for developer choice on one front -- choice of client language. Now you can use pretty much any language you want to write your client logic in, as long as a suitable binding to the DOM via the Pepper API has been developed. Not to mention the ability to write truly performant code (V8 is nice and all, but you can't beat native code in tight loops; media-manipulation apps like Photoshop come to mind).
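For reference, the shape of a NaCl module's C entry points looks roughly like this, based on my reading of the ppapi/c headers (names and details may vary across SDK versions):

    #include "ppapi/c/pp_errors.h"
    #include "ppapi/c/ppp.h"

    /* Called once when the module loads; get_browser lets the module
       look up browser-side (PPB_*) interfaces. */
    PP_EXPORT int32_t PPP_InitializeModule(PP_Module module,
                                           PPB_GetInterface get_browser) {
        (void)module; (void)get_browser;
        return PP_OK;
    }

    /* The browser asks the module for its (PPP_*) interfaces by name. */
    PP_EXPORT const void *PPP_GetInterface(const char *interface_name) {
        (void)interface_name;
        return NULL;  /* a real module would return e.g. its PPP_Instance */
    }

    PP_EXPORT void PPP_ShutdownModule(void) {}

Everything else (rendering, URL loading, and eventually DOM access) goes through interfaces obtained via those two lookup functions.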
Mono 2.10 already ships with support for NaCl, so you can already use a whole bunch of nice languages over the CLI that way. It'll be interesting to see where this takes the Moonlight project, though, since it has similar goals. With Microsoft seemingly downplaying or abandoning Silverlight, is this the end of the line?
Mono with support for NaCl sounds interesting. Does that mean, though, that the browser will have to download Mono every time it encounters a web app that's written in, e.g., C#? Or will it only have to download the C# bytecode?
On a slight tangent, I've always wondered why Java (or other JVM languages) couldn't make a comeback on the client side. Instead of having Applets that run sort of isolated from the browser, you could just expose the DOM.
But you can. https://bugzilla.mozilla.org/show_bug.cgi?id=618692#c1 has a simple example where Java beats gcc -O3 by a factor of 1.5 or so on exactly such a loop. This is not an isolated case, and I fully expect JS JITs to get there for many situations. Unless browsers stop improving them, of course.
I was somewhat irritated when Microsoft was pulling stuff like this with IE. But on the other hand, if they ever do make PNaCl, I will gladly disregard every other browser and target all my personal projects specifically for Chrome. The unfortunate thing is that I haven't seen any news or heard of any progress being made on PNaCl in a long time.
Indeed - having different compiled versions of your code go over the wire to the client simply because they're on a different architecture seems so... backward.
We need to use a specific version of the NaCl SDK for each Chrome version. See http://code.google.com/chrome/nativeclient/docs/runni...
Knowing how quickly Chrome versions change, and automatically most of the time, it will be very difficult to keep up.
I agree that people consider anything that appears in a web browser to be a "web app", but that doesn't make it right. ActiveX, Java applets, Silverlight, Flash, and NaCl are not "web apps." People also think "Internet" is synonymous with "World Wide Web."
A "web app" is something that is built using open standards that has oversight by a standards committee that is either the W3C or WHATWG.
Feel free to peruse through WHATWG's specification for "Web Applications", or "Web Apps" for short. There is a standardized, very explicit definition of what that term means.
Is it a purely definitional argument about what constitutes a "web app"? That raises the question of what exactly "the web" is and who gets to decide, on what basis - but more importantly, who cares? Let's all adopt Native Client apps in the place of web apps and call them something else.
Is it a "standards" argument against NaCl adoption, i.e. "Mr. SOCKWG says No!"?
Or is it an argument on technical merit against NaCl adoption? If so, you'll need to spell it out in more detail. Appealing to "Web = Good" is a fair enough starting point, because the Web is Good, but it's not enough here. NaCl advocates (or some of 'em) are presenting an argument about why the Web is good, and how it (or some Web' that isn't the Web - again, definitions in themselves are nothing) could be better. To simply repeat "Web = Good" in reply would be to make of the Web a cargo cult - and, one can't help noticing, a cargo cult which just happens to operate very much in the interests of the shrine guardians.
Ouch. I start to think of Microsoft and Apple. :( So far Google has been outpacing my expectations in driving web technologies forward - all this only to later go back to the "native client" approach Microsoft had in the late 90s (they stopped believing in HTML/JS). Does this mean MS was right back in '99? I'm really confused about Google's vision of how the web will work 10 years from now. Anyone with insight out there?
.NET and Silverlight, were they open and unencumbered standards not exclusively implemented by Microsoft, would be better than NaCl, because .NET uses a platform-independent bytecode, which avoids leaking the processor type through the abstraction of the browser.
Mono isn't exactly at par either; there's always a fairly substantial lag time, and only a subset of the .NET stack is standardized. ASP.NET and Windows Forms are notably absent from the ECMA standards, as well as from Microsoft's "community promise" with regard to patents.
First of all, we're talking about the VM here, so why does not having ASP.NET or Windows Forms make any difference? Better yet, would you even want your browser to come with them? Also, although the VM bytecode does change over time, it doesn't change as much as the .NET libraries, which is why the "lag time" argument isn't all that significant.
But outside of that, "substantial lag time"? In my opinion the Mono team has always been exceptionally fast when it comes to implementing new features in the .NET world. Plus, Mono has its own tool sets and features that aren't in the regular .NET framework and won't be any time soon.
I am not saying that Mono is the VM I'd like browsers to implement, but my reasons are definitely not related to it not being able to come with the ASP.NET framework.
No support is a bit better than substandard support, although I guess devs would still have to deal with people wanting a NaCl version created in addition to the normal web part of a project.
ActiveX was also the first thing that popped into my mind. However, the whole ActiveX lesson taught everyone, not just Microsoft, how badly things can go if the security issues aren't solved when releasing new technology, and I think Google has spent a lot of resources on that.
My main concern is how this is going to affect us developers in, let's say, the next five years. Are we going to need to write even more browser-specific code when other browsers also start to support these native clients?
What you just said, basically:
if MS implemented it, it's wrong, etc.; we all know that
if Google does the same, it must have been done better and therefore it's OK
The point, though: the design is broken security-wise. Whether it's Google, MS, Apple, or whoever else that implements it, it's going to be exploited, heavily, especially as soon as Chrome becomes the #1 browser (which will happen no matter what - certainly the no-opt-out install bundled with a zillion popular apps helps with that).
One has to remember that when ActiveX was introduced in Internet Explorer 3.0 the focus on security was a lot lower than in today's browser development.
I hope this trend stops this instant. Why should we develop a Chrome app which only runs in Chrome (cross-platform though it may be)? With existing technologies you can target way more people with all kinds of browsers. (OK, there are browsers which are hard to optimize for.)
I feel like we are walking back all the benefits browsers gave us.
You don't have to use it. I am really confused by the sentiment in this thread. If you want your web app to be cross-platform, then don't use NaCl. You will be no worse off than you were yesterday, before having read this article.
One must be very careful making statements like this. The only reason you're not seeing CPU-constrained webapps is because they aren't being created, because they would be constrained by the CPU. For example, I'm writing an app that would be significantly better if I could do fast client-side processing of data. Eventually I plan to use WebGL shaders, but I don't know if GLSL shaders can actually do what I need to do.
This could, at least in theory, make it possible to deliver on the web a serious competitor to Photoshop, for example. I don't see that happening any time soon in Javascript.
It's the opposite. It's more like comparing FFI/JNI to Bochs/QEMU. The Android NDK and other foreign-function interfaces are a way to de-secure and de-manage secure code: a controlled mechanism for stepping out of the sandbox. Google NaCl, like other virtualizers, is a way to contain inherently insecure code in a (well?) defined sandbox.
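To illustrate the FFI side of that analogy: a JNI native method is exactly such a controlled exit from the managed sandbox. A minimal sketch, with a made-up class and method name:

    #include <jni.h>

    /* Implements the Java-side declaration "native void poke();" on a
       hypothetical class Demo. Once the JVM calls across this boundary,
       the C code runs with full native privileges, outside the VM's
       safety checks; NaCl points the opposite way, fencing native code in. */
    JNIEXPORT void JNICALL Java_Demo_poke(JNIEnv *env, jobject self) {
        (void)env; (void)self;
        /* arbitrary native code here */
    }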
Good news for consumers. Horrible news for developers. OK.. let's see, we gotta make an iPhone app, an Android app, a web app, one for the Blackberry, maybe one for Windows Phone, and ooh.. a Facebook app! Now a Chrome app..
NaCl isn't exactly cross-platform. At the moment it only works on 3 hardware architectures at most, and that only if you put in some special effort....
Wow, one of the reasons for third-party browsers has now come full circle (ActiveX).
This, combined with how Chrome likes to hide things from users so they don't worry their little minds about them (status bar, the http:// in the URL bar, etc.), is a major disaster waiting to happen.