
And benchmarks with rigor, I might add, noting the important dependencies: the chipset, the bare-metal CPU architecture and bus wiring, and the compiler choices.

In particular, I've spoken with people who have cited significant performance differences when compiling Linux and OpenJDK with proprietary compilers matched to the vendor's CPU chipset, which then changes the profile of the JVM benchmark on the same bare-metal machine.

Meanwhile, L2 cache misses caused by object reference churn are the final deal-breaker with the JVM, if I've read correctly and widely enough about JVM performance. Beyond even garbage collection overhead, Java objects are separate heap allocations reached through references, and you get little control over their layout, so fundamental JVM performance cannot be tuned to optimize CPU cache use.
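
To make that concrete, here is a minimal, self-invented sketch of the locality difference between chasing object references and walking a flat primitive array. The class name, sizes, and the naive System.nanoTime() timing are all mine, and this is exactly the kind of measurement that would need a harness like JMH to have the rigor mentioned above:

    import java.util.ArrayList;
    import java.util.List;

    // Rough illustration (not a rigorous benchmark) of why reference
    // indirection tends to miss the cache: each boxed Long is its own heap
    // allocation that may land anywhere, so summing them chases pointers,
    // while a long[] keeps its values contiguous for the prefetcher.
    public class CacheChurnSketch {
        static final int N = 5_000_000;   // may need a larger heap, e.g. -Xmx1g

        // Boxed values reached through references; layout is up to the GC/allocator.
        static long sumBoxed(List<Long> values) {
            long sum = 0;
            for (Long v : values) {
                sum += v;                 // one pointer dereference per element
            }
            return sum;
        }

        // Primitive array: values sit contiguously in memory.
        static long sumPrimitive(long[] values) {
            long sum = 0;
            for (long v : values) {
                sum += v;
            }
            return sum;
        }

        public static void main(String[] args) {
            List<Long> boxed = new ArrayList<>(N);
            long[] primitive = new long[N];
            for (int i = 0; i < N; i++) {
                boxed.add((long) i);      // N separate Long objects on the heap
                primitive[i] = i;
            }

            long t0 = System.nanoTime();
            long s1 = sumBoxed(boxed);
            long t1 = System.nanoTime();
            long s2 = sumPrimitive(primitive);
            long t2 = System.nanoTime();

            System.out.printf("boxed:     %d in %d ms%n", s1, (t1 - t0) / 1_000_000);
            System.out.printf("primitive: %d in %d ms%n", s2, (t2 - t1) / 1_000_000);
        }
    }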

Briefly paraphrased: the JVM does its messaging and referencing through object references, whose default string identity looks like "java.lang.Object@%HASH_CODE%". When you instantiate and garbage-collect millions of such instances (each tied to one of those hellishly long fully qualified Java names) across many operations, costing many cycles, you will always suffer incurred latency, because most of your objects are constantly being pulled into and evicted from the L2 cache, no matter how long they actually live on the JVM heap.
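
A toy illustration of that churn, with a hypothetical Message class and loop count chosen just to show the shape of the problem: millions of short-lived objects are allocated and immediately become garbage, and the "java.lang.Object@<hex>" form is what the default Object.toString() prints for such an instance.

    // Minimal allocation-churn sketch; Message and the counts are made up.
    public class AllocationChurnSketch {
        static class Message {
            final long id;
            final String body;
            Message(long id, String body) { this.id = id; this.body = body; }
        }

        public static void main(String[] args) {
            Object o = new Object();
            System.out.println(o);    // default identity, e.g. java.lang.Object@1b6d3586

            long checksum = 0;
            for (long i = 0; i < 5_000_000L; i++) {
                Message m = new Message(i, "payload-" + i); // fresh allocations every iteration
                checksum += m.body.length();                // m is garbage right after this line
            }
            System.out.println("checksum=" + checksum);
            // Run with -Xlog:gc (JDK 9+) to watch the young generation fill
            // and get collected repeatedly while this loop runs.
        }
    }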

I don't have the link handy, or I'd post it.

But long story short: even though a JVM benchmark may look performant, benchmarks do not stress CPU resources the way real-life use cases do.

And changing the OS or motherboard, and compiling from source on your own physical hardware, can provide benefits, though they might not obviously be worth the extra effort of bootstrapping everything from the kernel up to the JVM before deploying your Java artifacts.


