
Startup times (especially for 'on demand' cloud workloads) are kind of the point of GraalVM. Effectively, it shifts optimisation to the compile phase. GraalVM builds take much more time than classic Java builds. But they run a bit faster (on some workloads dramatically) and use less memory. It's no silver bullet for development: if you want fast turnaround after changing your code, you want the classic JVM. GraalVM can help cut your production load a bit (although Oracle seems to reserve the heavier performance gains for its licensed GraalVM Enterprise customers)
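
To make that trade-off concrete, here is a minimal sketch (assuming a GraalVM install with the native-image tool on your PATH and an already-built app.jar; the names are illustrative):

    # JIT mode: quick build, slower startup, adaptive peak performance
    java -jar app.jar

    # AOT mode: the build itself can take minutes...
    native-image -jar app.jar

    # ...but the resulting binary (named after the jar by default)
    # starts in milliseconds and uses less memory
    ./app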


> But they run a bit faster (on some workloads dramatically)

That’s not true. For the majority of applications, the JIT compiler (either Graal’s JIT compiler or Hotspot) will be much faster. The startup time and memory reductions do hold for AOT, though.


I haven't noticed compile times to be any worse when using GraalVM to build Java projects.

Caveat: I haven't been using Native Image yet, though, so I can't comment on whether it'll be dramatically different for that build target.


GraalVM is multiple projects and I feel there is often a bit of a mix-up around these:

GraalVM is first and foremost a JIT compiler written in Java that can be plugged into OpenJDK. Because it is written in a higher-level language than the original Hotspot compilers (which are written in C++), it is easier to write/maintain/experiment with. This mode of operation is used extensively by Twitter, for example, because on their workloads it provides better performance than Hotspot, but the two trade blows in general. But this uses the standard javac compiler so it is basically just a slightly different JVM implementation.
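
For illustration, on JDK builds that ship with JVMCI this JIT mode could be enabled with flags along these lines (exact flags and availability depend on the JDK version and distribution, so treat this as an assumption to check against your setup; on GraalVM distributions the Graal JIT is already the default):

    java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI \
         -XX:+UseJVMCICompiler -jar app.jar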

Since a JIT compiler outputs machine code, it can be “easily” modified to do so in an offline setting as well; this is Graal’s AOT/native compilation mode. The build will take a long time compared to some other compilers (I don’t exactly know the reason for that, probably Java’s dynamic nature requiring more wide-reaching analysis?), but the result will have lower memory usage and faster startup speed compared to the traditional execution mode (but rarely better performance).
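
Part of that analysis is that native image assumes a closed world, so reflective access generally has to be declared up front. A hedged sketch of such a reflection config (the class name is made up; the file is conventionally placed under META-INF/native-image/ as reflect-config.json):

    [
      {
        "name": "com.example.MyDto",
        "allDeclaredConstructors": true,
        "allDeclaredMethods": true,
        "allDeclaredFields": true
      }
    ]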

There is also Truffle, which turns “naive” language interpreters into efficient JIT-compiled runtimes and allows polyglot execution; that is a whole other dimension.
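
To give a feel for the polyglot side, a small sketch using GraalVM's polyglot API (assumes a GraalVM runtime with the JavaScript language installed):

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class PolyglotDemo {
        public static void main(String[] args) {
            // Run a JavaScript snippet from Java; the JS engine is itself
            // a Truffle interpreter that Graal JIT-compiles.
            try (Context context = Context.create()) {
                Value result = context.eval("js", "6 * 7");
                System.out.println(result.asInt()); // prints 42
            }
        }
    }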


Wow, yes this definitely was not clear to me as a (longtime) user of GraalVM.

Thanks a lot @kaba0, big-O would be smart to put your comment as part of the GraalVM site FAQ for "What is GraalVM".

Cheers.

EDIT: One request for a small clarification

> But this uses the standard javac compiler so it is basically just a slightly different JVM implementation.

What is "this"? Are you referring to TFA?



