> In our benchmarks, the Clojure version underperformed by about 9-13% when comparing peak throughput.
The performance difference between Clojure and the pure Java implementations was much smaller than I'd thought. Quite amazing for a dynamically typed language to get so close to Java in performance, to be honest.
The ring-clojure performance was much lower than the Java equivalent, but that's expected because deserialization can be A LOT faster when you use static types to guide parsing (e.g. cache the string keys and never allocate object keys, and maybe common small values as well). I wonder if Clojure contracts could be used for the same tricks.
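The key-caching trick mentioned above can be sketched in plain Java (a hypothetical illustration, not code from either benchmark): a parser that sees the same JSON keys over and over can hand out one shared String per distinct key instead of allocating a fresh one per object.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KeyCache {
    // Maps each distinct key to its canonical String instance.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    String canonical(String key) {
        // First sighting stores the key; later sightings reuse that instance,
        // so millions of parsed objects share a handful of key Strings.
        return cache.computeIfAbsent(key, k -> k);
    }

    public static void main(String[] args) {
        KeyCache keys = new KeyCache();
        String a = keys.canonical(new String("userId"));
        String b = keys.canonical(new String("userId"));
        System.out.println(a == b); // true: same cached instance
    }
}
```

Real parsers use faster specialized caches, but the idea is the same: the allocation happens once per distinct key, not once per occurrence.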
When using a "mapper", the user tells the library up-front what objects will be used for holding the JSON data. You can do all kinds of optimizations if you know the shape of the data and what objects will need to be allocated in order to hold it, before the JSON actually arrives.
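As a hedged sketch of what "knowing the shape up front" buys you (the `Point` type and parser here are invented for illustration): when the target schema is fixed, the parser can scan straight for the known fields, allocating no key strings and no intermediate maps.

```java
public class MapperSketch {
    // The shape is declared before any JSON arrives.
    record Point(int x, int y) {}

    static Point parsePoint(String json) {
        // Scan directly for the two known fields; no generic tokenizer needed.
        return new Point(intAfter(json, "\"x\":"), intAfter(json, "\"y\":"));
    }

    private static int intAfter(String s, String key) {
        int i = s.indexOf(key) + key.length();
        int v = 0;
        while (i < s.length() && Character.isDigit(s.charAt(i))) {
            v = v * 10 + (s.charAt(i++) - '0');
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(parsePoint("{\"x\":3,\"y\":7}")); // Point[x=3, y=7]
    }
}
```

Libraries like Jackson's data-binding do this far more robustly, but the principle is the same: the target objects are known ahead of time, so the hot path does almost nothing speculative.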
In my testing I couldn't tell the difference between Java and Clojure performance solving the same problem. Probably because Clojure is lazy in many cases (or the implementer can choose a lazy implementation), whereas Java code is typically eager.
> The performance difference between Clojure and the pure Java implementations was much smaller than I'd thought. Quite amazing for a dynamically typed language to get so close to Java in performance, to be honest.
That's not surprising given that they rewrote all the middleware in Java; the Clojure part was just a very thin wrapper around vertx. And that's fine, it's what makes Clojure usable in production, but the vast majority of Clojure users bash Java all the time, take these libraries for granted, and ignore the fact that a big share of JVM software is better written in Java than in Clojure.
I wonder if they meant these "key takeaways" in the context of performance, or generally? Taken on their own, they don't match the usual reasons people decide for or against Clojure.
* "Clojure frees developers from the perils of writing concurrent programs, but at a price."
* "When concurrency is not a factor, consider using Java."
In the first statement, the "price" refers to the performance implications. So the second statement refers to that too, and the article mostly talks about performance.
Pure FP generally scales well with caching/memoization, because functions are pure, and with concurrency, because data is immutable. However, the paradigm typically allocates more garbage and often creates more indirection due to its declarative nature (think function composition/pipelines/laziness and so on). So it is not as suitable for fine-grained/low-level optimizations as imperative programming.
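The memoization point can be sketched in a few lines of Java (an illustrative example, not from the article): because a pure function always returns the same output for the same input, a concurrent cache in front of it can never go stale.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memo {
    // Wraps a pure function with a thread-safe result cache.
    static <A, R> Function<A, R> memoize(Function<A, R> pure) {
        Map<A, R> cache = new ConcurrentHashMap<>();
        // computeIfAbsent runs the function at most once per distinct input.
        return a -> cache.computeIfAbsent(a, pure);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> square = memoize(n -> n * n);
        System.out.println(square.apply(12)); // 144, computed once
        System.out.println(square.apply(12)); // 144, served from the cache
    }
}
```

Note this only works because the wrapped function is pure; memoizing an effectful function would silently skip its side effects on cache hits, which is exactly the safety property FP buys here.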
As dgb23 said, the takeaways were in the context of performance.
It's also important to emphasize the part about only seeing performance degradation when the server was at full capacity.
If your service doesn't push your server to its limits, which is most often the case, then there really isn't any performance advantage to writing parts in Java.
This approach shines in those cases where you deal with very high throughput.