SPECjbb MultiJVM - Java Performance

Moving on from SPEC CPU, we shift over to SPECjbb2015. SPECjbb is a benchmark developed from the ground up that aims to cover both Java performance and server-like workloads. From the SPEC website:

“The SPECjbb2015 benchmark is based on the usage model of a worldwide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases, and data-mining operations. It exercises Java 7 and higher features, using the latest data formats (XML), communication using compression, and secure messaging.

Performance metrics are provided for both pure throughput and critical throughput under service-level agreements (SLAs), with response times ranging from 10 to 100 milliseconds.”

The important thing to note here is that the workload is transactional in nature and mostly operates on the data plane, between different Java virtual machines, and thus threads.

We're using the MultiJVM test method, where all the benchmark components, meaning the controller, transaction injector, and backend virtual machines, run on the same physical machine.

The JVM runtime we're using is OpenJDK 15 on both the x86 and Arm platforms, although not exactly the same sub-version on each; these were the closest builds we could get:

Altra system:

openjdk 15.0.1 2020-10-20
OpenJDK Runtime Environment 20.9 (build 15.0.1+9)
OpenJDK 64-Bit Server VM 20.9 (build 15.0.1+9, mixed mode, sharing)

EPYC & Xeon systems:

openjdk 15 2020-09-15
OpenJDK Runtime Environment (build 15+36-Ubuntu-1)
OpenJDK 64-Bit Server VM (build 15+36-Ubuntu-1, mixed mode, sharing)

Furthermore, we're configuring SPECjbb's runtime settings with the following options:

SPEC_OPTS_C="-Dspecjbb.group.count=$GROUP_COUNT -Dspecjbb.txi.pergroup.count=$TI_JVM_COUNT -Dspecjbb.forkjoin.workers=N -Dspecjbb.forkjoin.workers.Tier1=N -Dspecjbb.forkjoin.workers.Tier2=1 -Dspecjbb.forkjoin.workers.Tier3=16"

Here N=160 for 2S Altra test runs, N=80 for 1S Altra test runs, N=112 for 2S Xeon, N=56 for 1S Xeon, and N=128 for both 2S and 1S runs on the EPYC system. I tried running 256 or 160 threads on the 2S EPYC configuration, but the benchmark would error out with a critical timeout and I wasn't able to fully debug why.
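
As a concrete illustration, a 2S Altra run, with one back-end group per NUMA node as described further down, would end up with settings along these lines (the one-injector-per-group value is our assumption; this is a sketch rather than the verbatim run script):

GROUP_COUNT=8    # 2 sockets x 4 NUMA nodes = 8 back-end groups (assumed mapping)
TI_JVM_COUNT=1   # one transaction injector per group (assumption)
SPEC_OPTS_C="-Dspecjbb.group.count=8 -Dspecjbb.txi.pergroup.count=1 -Dspecjbb.forkjoin.workers=160 -Dspecjbb.forkjoin.workers.Tier1=160 -Dspecjbb.forkjoin.workers.Tier2=1 -Dspecjbb.forkjoin.workers.Tier3=16"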

In terms of JVM options, we're limiting ourselves to a bare-bones set to keep things simple and straightforward:

Altra & EPYC system:

JAVA_OPTS_C="-server -Xms2g -Xmx2g -Xmn1536m"
JAVA_OPTS_TI="-server -Xms2g -Xmx2g -Xmn1536m"
JAVA_OPTS_BE="-server -Xms48g -Xmx48g -Xmn42g -XX:+AlwaysPreTouch"

Xeon system:

JAVA_OPTS_C="-server -Xms2g -Xmx2g -Xmn1536m"
JAVA_OPTS_TI="-server -Xms2g -Xmx2g -Xmn1536m"
JAVA_OPTS_BE="-server -Xms172g -Xmx172g -Xmn156g -XX:+AlwaysPreTouch"

The reason the Xeon system runs a larger back-end heap is that we're using a single NUMA node per socket, while on the Altra and EPYC we're running four NUMA nodes per socket for maximised throughput. For the 2S figures that means 8 back-ends running on the Altra and EPYC versus 2 on the Xeon, and naturally half those numbers for the 1S benchmarks. The back-ends and transaction injectors are affinitised to their local NUMA node with numactl --cpunodebind and --membind, while the controller is called with --interleave=all.
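
To make that affinitisation more tangible, here's a rough sketch of how one group might be launched, following the conventions of SPECjbb's stock run_multi.sh script; the mode names and -G/-J switches follow that script, while the specific IDs and script plumbing are placeholders rather than our verbatim harness:

# back-end group 1 pinned to NUMA node 0 (repeated per node with incremented IDs)
numactl --cpunodebind=0 --membind=0 java $JAVA_OPTS_TI $SPEC_OPTS_TI -jar specjbb2015.jar -m TXINJECTOR -G=GRP1 -J=JVM1 &
numactl --cpunodebind=0 --membind=0 java $JAVA_OPTS_BE $SPEC_OPTS_BE -jar specjbb2015.jar -m BACKEND -G=GRP1 -J=JVM2 &
# controller with its memory interleaved across all nodes
numactl --interleave=all java $JAVA_OPTS_C $SPEC_OPTS_C -jar specjbb2015.jar -m MULTICONTROLLER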

The max-jOPS and critical-jOPS result figures are defined as follows:

"The max-jOPS is the last successful injection rate before the first failing injection rate where the reattempt also fails. For example, if during the RT-curve phase the injection rate of 80000 passes, but the next injection rate of 90000 fails on two successive attempts, then the max-jOPS would be 80000."

"The overall critical-jOPS is computed by taking the geomean of the individual critical-jOPS computed at these five SLA points, namely:

      • Critical-jOPS overall = Geo-mean of (critical-jOPS @ 10ms, 25ms, 50ms, 75ms and 100ms response time SLAs)

During the RT curve building phase the Transaction Injector measures the 99th percentile response times at each step level for all the requests (see section 9) that are considered in the metrics computations. It then computes the Critical-jOPS for each of the above five SLA points using the following formula:
(first * nOver + last * nUnder) / (nOver + nUnder) "


That’s a lot of technicalities to explain an admittedly complex benchmark, but the gist of it is that max-jOPS represents the maximum transaction throughput of a system until further requests fail, and critical-jOPS is an aggregate geomean transaction throughput within several levels of guaranteed response times, essentially different levels of quality of service.

Beyond the result figures, the benchmark keeps detailed timings of responses and tracks a few important statistical data-points across a response-time curve, as follows:


2S EPYC 7742 THP Enabled

I'm starting off with the EPYC results as they serve as a good reference point: the max-jOPS here ends up quite high at over 270k, while the critical-jOPS lands at around 125k. The system still manages to retain 90th percentile response times under 20ms up until 230k jOPS, which is excellent, with 99th percentile results starting to degrade past 110k jOPS.


2S Xeon 8280 THP Enabled

On the Xeon system, we see similarly flat 90th percentile response times up until around 120k, with 99th percentiles starting to degrade after 90k, but in a much tighter curve than on the EPYC system. While the Xeon has less overall throughput, its scaling up to that throughput limit could be considered better.


2S Altra Q80-33 THP Enabled

With the EPYC and Xeon systems as context, we’re finally looking at the Altra results, which look very different.

Unlike on the x86 systems, the 99th and 90th percentile response times on the Altra chip degrade earlier in the throughput curve. What this reminded me of is the STREAM results from earlier in the review, where we saw that an initial batch of cores was able to hit peak bandwidth across the memory controllers, but adding further cores to the mix actually degraded performance, pointing to congestion across the mesh interconnect.

It's possible that the SPECjbb results here are hitting a similar level of saturation under load, given the large amount of inter-core communication and memory transactions involved.

SPECjbb2015-MultiJVM max-jOPS

Charting the max-jOPS of the different systems, I ran figures for both 1S and 2S system configurations. Additionally, I also tested the benchmark both with transparent huge pages always enabled and with the default madvise state, as we've seen in the past that this can have a notable impact on the resulting performance.
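
For reference, the THP mode on Linux is switched through the standard sysfs knob, so the two tested states correspond to the following (a minimal sketch, with madvise being the default on our setup):

echo always > /sys/kernel/mm/transparent_hugepage/enabled    # THP always enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled   # default: only for madvise() regions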

Whilst the Altra system is able to beat the Xeon, that's not sufficient to match the EPYC system, which still lies ahead by a good margin. The exact reasons for this discrepancy compared to the x86 systems aren't immediately clear, as we're dealing with many layers here. AArch64 OpenJDK JVM performance certainly might not be as mature and optimised as its x86-64 counterpart, and there is certainly a rabbit hole of optimisations and knobs we could have gone down to change things, although we still view these simple out-of-the-box default settings as valuable and valid for comparison.

One thing that did come to mind immediately when I saw the results was SMT. Because this is a transactional, data-plane-resident type of workload, SMT will undoubtedly help a lot in terms of performance, so I tested the EPYC chip with SMT disabled, and indeed max-jOPS went down to 209.5k for the 2S THP-enabled run, meaning SMT accounts for a 29.7% performance benefit in this benchmark.

A further indication that the Altra system's cores are being underutilised and that it is memory-bottlenecked is its power consumption: even when fully loaded in the RT curve, it generally hovered around 170-180W per socket, while the x86 systems were filling out their TDPs.

It's generally these kinds of workloads that SMT works best on; it's why IBM can deploy SMT4 and SMT8 processors, and it's the type of workload Marvell's ThunderX was trying to carve a niche for itself in with SMT4.

SPECjbb2015-MultiJVM critical-jOPS

For the critical-jOPS figures, the Altra doesn't do well at all given its response-time curve. Beyond the lack of SMT (the EPYC here again achieves its high score with a 26.4% contribution from the secondary logical cores), we're maybe looking at software-side immaturity of out-of-the-box Java performance on Arm systems. The figures here shouldn't be taken as an authoritative conclusion that Java performance on the Altra sucks, but at least we're seeing signs that it doesn't look too great.

Comments

  • Wilco1 - Monday, December 21, 2020

    Using Zen 2 is not correct since it uses much larger transistors. Using Kirin 990 5G density gives an estimate of 330mm^2 for Graviton 2. The size of N1 cores has been published for 7nm, so we know it is 1.4mm^2. You're right that PCIe lanes would add to it as well - assuming the PHYs have the same size as DDR PHYs at the same speed, 64 lanes would be about 12-15mm^2. That would increase it to about 365mm^2.
  • milli - Monday, December 21, 2020

    Kirin 990 5G uses N7+. Altra uses N7.
    Not only is the process different but they're also totally different categories of products concerning transistor density. A mobile SOC can be very dense. It barely has any IO (which is not transistor dense). Also GPU, DPU, IMG, ... all are extremely dense.
    Kirin 990 5G is 90MTr/mm^2.
    No way a server class SOC is going to be more than 60MTr/mm2.
    Renoir = 62, Navi 21 = 52, Zen2 = 54, Vega 20 = 40, Navi 10 = 41.
    Ampere isn't going to magically break 60.

    "The size of N1 cores has been published for 7nm, so we know it is 1.4mm^2"
    Those are ARM numbers and that is only if you use high density libraries.
  • Wilco1 - Monday, December 21, 2020

    Arm servers don't need high performance libraries - even mobile phones clock over 3 GHz using high density libraries. See https://images.anandtech.com/doci/13959/03_Infra%2... (note 3.1GHz and 1.4mm^2 with 1MB L2 on 7nm is ~100MT/mm^2)

    Using ~90MT/mm^2 for 7nm is reasonable since that is the reported density of recent 7nm chips (Kirin 990 5G is 91, 4G is 88 - the older 980 gets 93). Mobile SoCs already have a large amount of IO and analog logic and we are multiplying that amount by 3x.

    This shows how stupid it is to use high performance libraries in server chips - they don't need to run at 5GHz!
  • milli - Monday, December 21, 2020

    We have different opinions but there's only one true fact: the die size is not disclosed. So anything anyone says is just a pure guess. You can't throw it around as fact.
  • milli - Monday, December 21, 2020

    Navi 10/20 chips run at < 2Ghz and are 40MTr/mm. Just because Altra runs at 3.3Ghz, doesn't mean that it doesn't use HPL.
  • Josh128 - Friday, December 18, 2020

    Exactly-- no way in hell this thing is just 350mm^2. The package is huge. Why would a 350mm^2 die need such a giant package?
  • Wilco1 - Friday, December 18, 2020

    The package is only 16% larger than EPYC. Do you see any opportunity to reduce the huge number of pins? There are 8 memory channels plus full 128 PCIe lanes.
  • mode_13h - Sunday, December 20, 2020

    Yes, the problem Altra Max will likely face is more memory bottlenecks. Also, I wonder if they'll have to dial clocks down, a little, to keep the power-efficiency numbers attractive.
  • Wilco1 - Monday, December 21, 2020

    Altra Max drops max frequency to 3GHz, but it's not clear whether the TDP stays the same.
  • Gondalf - Friday, December 18, 2020

    Are you sure :). Come on
