Presumably because how would it differentiate between a legit "already installed" extension with a signature that cannot be verified, and an extension installed by malware that also cannot be verified?
Browsers can only protect against malicious websites and malicious extensions. They can't protect against malware. Even without any cert problems, malware on your machine can modify the browser executable/process to insert whatever code it wants.
With this reduced threat model, it's easy to simply keep existing pre-installed extensions available, and disable updates. Your only problem is if a pre-installed extension is malicious or has a vulnerability, it will remain.
> Presumably because how would it differentiate between a legit "already installed" extension with a signature that cannot be verified, and an extension installed by malware that also cannot be verified?
That's why a signature can also be accompanied by a trusted timestamp, which confirms that the signature was made while the certificate was still valid.
This is the common way to sign all Windows software to avoid this exact kind of problem.
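The rule is simple enough to sketch. This is a minimal, hypothetical illustration of the timestamp logic only (field names are made up); real Authenticode/RFC 3161 verification also has to validate the timestamp authority's own certificate chain:

```python
from datetime import datetime, timezone

def countersigned_signature_valid(cert_not_before, cert_not_after, signing_time):
    """Sketch of the timestamp rule: a signature stays trustworthy after the
    signing certificate expires, provided a trusted timestamp authority
    attests that it was produced inside the certificate's validity window."""
    return cert_not_before <= signing_time <= cert_not_after

# Certificate expired in 2019, but the signature was timestamped in 2018:
nb = datetime(2017, 1, 1, tzinfo=timezone.utc)
na = datetime(2019, 1, 1, tzinfo=timezone.utc)
ts = datetime(2018, 6, 1, tzinfo=timezone.utc)
print(countersigned_signature_valid(nb, na, ts))  # → True
```

So an expired certificate alone is not a reason to distrust an already-installed extension, as long as the timestamp predates the expiry.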
Yes, that implies this is a known and solved problem. It’s embarrassing for Mozilla to not have prepared for this.
If an extension was already installed, it passed the signature check at the time of installation. I'm not sure what benefits we get from periodically re-running the exact same check -- particularly when balanced against the risks of the re-checks, which are now obvious.
Personally I despise the idea of the software already on my pc being dependent on signatures stored on a remote server. I installed it and Mozilla can fuck right off. It's my responsibility to police what software is on my computer, not theirs.
We have a technical method to do so, but the site has not been created, that I know of. For example:
* A domain name like ipfs://nytimesmirror.com that mirrors nytimes articles (at the same relative URLs)
* Some kind of crowdsourced process for content curation.
I'm aware that it's not legal, but I wouldn't be against it. Content piracy has been a catalyst for change. I'd simultaneously be willing to pay the NYTimes for access to their content, but I'm not willing to sign up for a $15/month subscription. So if their content were widely pirated using a delivery method like IPFS, they might be incentivized to solve that problem for me.
A comparison that isn't quite the same on one dimension doesn't invalidate the comparison... The dimensions I was comparing were temptation and "missing out" on some free stuff. Still, not eating free pizza at the tech event takes 0 time too. Maybe you'd argue that you have to eat sooner or later and the food is already there, so you can multitask it; but by the same argument, for many a free X on Craigslist you'll have to get an X sooner or later, it's already there, and you can multitask picking it up while running some other errand. Or hey, ignoring Craigslist, look at that decent-looking free chair you're about to drive past on your way home from work...
The comparison is very accurate: not picking up free junk on Craigslist takes 0 time, and not eating the free food at the tech event (that you are already going to) takes 0 time.
They couldn't, because JavaScript etc. are Turing complete, so you can't predict whether a script will terminate (the halting problem). With Bitcoin you know the script will end, and that its running time correlates loosely with its size, so you can't really use it to tie up network nodes. Bitcoin scripts have no loops or gotos.
A guarantee of completion isn't strictly necessary: you can cap the number of ticks/cycles a script is allocated on a deterministic reference machine.
Surely there'd be a number of technical and economic challenges to be overcome, but Bitcoin itself is an example of how a good-enough solution may be hiding in plain sight.
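The capping idea can be sketched with a toy stack machine. This is a hypothetical mini-interpreter, not Bitcoin's actual Script engine: every operation burns one unit of a fixed budget on a deterministic reference machine, so a non-terminating program fails predictably instead of hanging a node.

```python
def run_capped(program, max_steps=1000):
    """Evaluate a tiny stack language (integer literals, '+', '*', 'dup')
    with a hard step budget. Programs that exceed the budget are rejected
    deterministically, sidestepping the halting problem."""
    stack, steps = [], 0
    for op in program:
        steps += 1
        if steps > max_steps:
            raise RuntimeError("out of gas")
        if op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])
        else:
            stack.append(int(op))
    return stack

print(run_capped(["2", "3", "+", "dup", "*"]))  # → [25]
```

The catch, as the reply below this notes, is that "one step" must mean exactly the same thing in every implementation, or the network forks.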
Indeed you can, but then you have a very exact requirement on how the interpreter is implemented, which can never be optimized, and any disagreement between implementations can cause a world-ending consensus failure.
By not having looping or recursion, script naturally gets an operation limit by virtue of having a size limit, and this is _relatively_ easy to get right between implementations. (Though so far several of the alt implementations have gotten it wrong.)
You could require all scripting to be done in a language supporting total functional programming, like Epigram. That would be fairly hilarious (also counterproductive, because the increased script size would cause blockchain bloat).
I find it suspicious that Java or Scala didn't make the list. Java may not be sexy but it checks many boxes... And I suspect that Scala would have been a serious contender in brevity too.
If I need to install the JVM just to try your software, then it's going to remain untried - however many design patterns you've managed to cram into it.
Because it's too big, it's very slow to start, it's controlled by a company that isn't very lovable, it's not installed by default on our systems, it's free and open source without in fact being very libre, and it has frequent security flaws that don't seem to be addressed seriously... Some of this is probably only partially true, but it gives you an idea why people don't want to use it, whether they're right or wrong.
So it's a good thing you're propagating partial truths then, right?
The JVM isn't easy to beat for software that's more complicated than "Hello World". Sure you can beat it, but you're going to have to work very, very hard.
The security flaws exist in the area few people care about (and shouldn't even really be installed any more) -- the sandboxing and web start code.
Did you include gcc and friends in your "too big" calculations? Hard to program without them.
It's a crazy moving target that never seems to work right. Even if it does work right, pretty soon some update breaks it. And then there's the fact that most (desktop) Java applications look like arse and run like a dog.
Java aficionados don't seem to realise just how obnoxious many of us find the JVM. If you live with it, and do your work with it, I guess you come to accept it as a fact of life - always there in the background. If literally nothing that you do uses it, then it's a massive extra dependency to add to a simple desktop app - a dependency that is often quite difficult to manage and keep updated.
Re dependency: you probably have a point on Windows, but it's your choice to use an OS without a package manager to keep things up to date. I don't know about Mac but on Linux it is a non issue.
That said it does not do well for desktop apps due to slow startup, high memory use and lack of native toolkit. It does much better as a server side runtime.
Of these, I have only tried Eclipse. While certainly more full-featured out of the box, its speed, especially at startup, is really nothing to boast about when compared to Visual Studio.
I can't say I've had that problem myself. They are quite startup-heavy, but typical C# console apps start up in ~150ms on an SSD-based system and ~700ms on spinning-rust disks.
If you're calling something thousands of times, fast process startup times matter. That isn't the model Windows uses, though, and Windows is the primary target of C#.
In the tests, his OCaml implementation takes 7ms, and his Python one takes 64 or 109 ms. So even on your SSD, that's 50% to an order of magnitude too slow, just for startup.
I cannot remember exactly, but in the discussions about page load times translating into revenue, 100ms was the number being tossed around, IIRC.
I certainly notice any time I boot something up that requires the JVM. I refuse to use the CLR, so I can't tell you much from my own experience with that.
Once it gets going, though, it screams. Not sure I want to trade the minuscule difference in startup time for that.
As for page load times, this is silly. There is no startup time on a page at all. Possibly the first hit, but there are warm-start options for that in the CLR at least, which make this a complete non-issue.
To give you an idea, 98% of our page hits are under 80ms processing time and we have big, heavy pages (we're old school asp.net mostly).
I'm not talking about page load times. A server side app is a great use-case for something that has slow startup. I was just using that as a citation for perceptions on speed and how it can negatively affect experience.
I'm talking about command-line applications, like the one in the article and in your first post. Then you get the startup every single time you run the command.
For stuff like tab completion and for launching small command-line apps (think "grep", "sed", etc), absolutely. A penalty of hundreds of milliseconds for the _launcher_ alone can double (or more) the total execution time.
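You can measure the launcher penalty yourself. This is a rough sketch that assumes `python3` is on your PATH; absolute numbers vary wildly by machine, disk, and cache state, and timing a full process launch this way includes OS overhead too:

```python
import subprocess
import time

def startup_ms(cmd):
    """Time a full process launch-to-exit. For a trivial program this is
    dominated by interpreter/VM startup cost, not by the program itself."""
    t0 = time.perf_counter()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    return (time.perf_counter() - t0) * 1000.0

print(f"python3 no-op: {startup_ms(['python3', '-c', 'pass']):.0f} ms")
```

Run the same thing against a no-op JVM or CLR program and the difference for grep-sized tools becomes obvious.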
I use zeroinstall a lot (I'm a contributor), and I turned off tab completion because the lag (of the python implementation) made me wonder if my terminal had locked up, which was far more distracting than trying to remember the available arguments. I have re-enabled it in the ocaml port, because now it's effectively instant.
With ngen and mono-aot it can be compiled to native code directly.
> JVM-based languages would not fare any better.
It is all a matter of which JVM is used. Some of them offer AOT compilation and on disk cache of JITted code from previous runs, thus matching the startup time of native binaries.
Agreed, pretty large omission given that C# was included.
Not such a big fan of Java, but Scala seems to be a strong contender for Java.Next*, and is certainly a viable dynamic-to-static transition language given its terse syntax, deliciously rich collections library, and implicits support (for the MOP fans).
* Twitter being able to withstand spikes of 140K+ tweets per second without lagtime is impressive to say the least.
Yeah, it's a joke that Rust (which I think will be a great language, but all of the syntax hasn't even been decided yet) is being considered, when Java and friends are not.
Really, this article is about "I want to rewrite my program in the newest, coolest language", not about which is the best tool for the job. And that's fine, but the author should present it that way.
I'm sure you meant "newest and/or coolest", or "recently become coolest", but it's worth noting that Haskell is about as old as Python, and OCaml is not much younger.
It's already been said that Java is a very heavy dependency for a simple desktop application. Rust compiles to native code, that's the advantage over Java.
Author here. You're right, "Awk in Haskell" is perhaps a bit misleading. The intention is not to replace Awk for the masses, but mainly to provide an Awk-like tool for programmers who are more familiar with Haskell. Similar to pyline: http://code.activestate.com/recipes/437932-pyline-a-grep-lik....
I considered using Vim at the time. The nice thing about doing it all on the command line is that it's repeatable and persistent. I have a huge shell history, and that command will now be in my history for months. I rely so heavily on my history that I'll often remember a related command from a month ago, find it, then modify it for my current use. :)
JSON syntax is essentially Python/Ruby literal syntax for lists and strings (although the reverse isn't quite true). Why add a library import and extra code on each side just to emit and consume what is effectively the same data? :)
This is my personal list of RSS feeds, which I've been curating for many years, so I knew exactly what data was coming. It was a one-time hack: it got my RSS feed list into the database, and was never executed again. It was also only for local testing purposes.
I think that it would be pretty crazy to import simplejson on the Python side, write extra code to dump the data to JSON, look up how to do it in Ruby (because I don't know off the top of my head), and add the code to load it from JSON. That effort would not change the outcome by a single bit. For other theoretical data sets, it would, but this is not other data sets.
If this code were ever going to be used again, sure, I'd transfer it in a safe way. But this is the equivalent of `grep`ing and `cut`ing at the Unix shell – the goal is to get data from point A to point B, never to be revisited. I reject the idea that, in this particular case, extra effort to serialize and deserialize JSON data is superior to adding five characters of str().
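For the record, the hack looks roughly like this (a sketch with made-up data; the real feed list isn't shown here). The Python side just prints its own literal syntax, and because list-of-strings literals coincide across the two languages, Ruby's `eval` (or Python's own `ast.literal_eval`) reads it straight back:

```python
import ast

# Hypothetical stand-in for the actual feed list.
feeds = ["http://example.com/rss", "http://example.org/feed"]

# The "five characters of str()": emit Python's literal syntax directly.
dumped = str(feeds)
print(dumped)  # → ['http://example.com/rss', 'http://example.org/feed']

# Round-trips on the Python side; Ruby's eval accepts the same text,
# since its array/string literal syntax matches for this shape of data.
assert ast.literal_eval(dumped) == feeds
```

Unsafe for untrusted input, obviously, which is exactly the trade-off being argued about above: for known, local, throwaway data it's five characters instead of two serializer integrations.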