Despite being a long-time Java programmer, I had presumed that serverless “function-as-a-service” offerings like AWS Lambda would be the domain of lighter-weight runtimes such as Node.js. Concerns like cold-start latency, and the cost/benefit ratio of the JVM’s more expensive just-in-time compilation optimizations, conceptually favor runtimes designed to execute single invocations quickly. And what runtime could be better suited to run-once workloads than one built for a web browser?

A recent article on InfoQ challenges this notion. The key observation is that an AWS Lambda function is not actually run-once, as the name might imply. Lambda services are still written much like long-running server processes; they are simply lazy-launched and have a lifecycle that is not under the user’s control. A process isn’t launched until a request arrives, but it isn’t destroyed after a single invocation either. Rather, AWS decides how long to keep a given process running and when to kill it.
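This reuse is easy to observe directly. The following is a minimal sketch of my own (not from the article), assuming the standard aws-lambda-java-core RequestHandler interface; static state is initialized once per process, so an invocation count greater than one means the request was served by a reused process rather than a freshly launched one.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.concurrent.atomic.AtomicLong;

public class ReuseProbe implements RequestHandler<Object, String> {

    // Initialized once per process, not once per invocation.
    private static final long STARTED_AT = System.currentTimeMillis();
    private static final AtomicLong INVOCATIONS = new AtomicLong();

    @Override
    public String handleRequest(Object input, Context context) {
        long count = INVOCATIONS.incrementAndGet();
        // A count greater than 1 means this process was kept alive and reused
        // rather than restarted for the current request.
        return "invocation " + count + " of a process started at " + STARTED_AT;
    }
}
```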

While the full details of the presentation aren’t yet online, John Chapin of Symphonia empirically tested a number of different scenarios with a benchmarking tool of his own design. Some of the results validated well-known behavior, such as the fact that an instance’s CPU performance is governed by the memory allocation requested. Other data is tantalizing: over a two-day run, Chapin measured how frequently instances are restarted. On a per-invocation basis, 128 MB instances were restarted 1.8% of the time, whereas 1.5 GB instances were restarted only 0.97% of the time. How that probability should be interpreted depends on the actual methodology of the test, so there is definitely more information pending.

Once you have applications running in Java, the next logical question is what kind of code will perform well in this environment. Performance monitoring tools are obviously part of the equation, but the JVM’s command-line arguments are also among the fundamental parameters that influence application performance. Surprisingly, the AWS documentation says little about the exact command-line arguments used to launch JVMs in this environment. Chapin’s article got me thinking about this when he mentioned the TieredCompilation JVM flag, but a search of the Internet turned up little corroborating evidence, so I had to try it for myself.

I whipped up a small custom function for this purpose, which iterates through the InputArguments in the RuntimeMXBean:

[Image: JVM Performance 2 — the test function]
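For readers without the screenshot, a sketch of the same idea looks roughly like this (my own reconstruction, with the handler and class names as assumptions rather than the exact code shown above):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.lang.management.ManagementFactory;

public class JvmArgsFunction implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object input, Context context) {
        // RuntimeMXBean.getInputArguments() returns the options the JVM was
        // launched with (e.g. -XX flags), which is exactly what we want to see.
        return String.join("\n",
                ManagementFactory.getRuntimeMXBean().getInputArguments());
    }
}
```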

And got this result:

[Image: JVM Performance 3 — the JVM arguments returned]

which substantiates Chapin’s result. In particular, it means the serial collector is enabled and tiered compilation is disabled. The choice of the serial collector is interesting, since that collector is optimized for single-processor machines or heaps under roughly 100 MB. Tiered compilation is an option that helps server VMs start up faster and reach profile-guided optimizations more quickly; that it is disabled may reflect either empirical performance findings on Amazon’s part, or perhaps some impact on their own environment.

This also carries some key ramifications for the type of code that runs best in this environment, and it reinforces the lambda concept. With the serial GC, every collection is a stop-the-world pause of unbounded duration, so triggering GC at all is undesirable. A traditionally written Java application that makes heavy use of heap allocation will still run, but code written to aggressively constrain allocation and GC activity will take better advantage of the JVM’s lifecycle in this environment.
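As an illustration of that style (my own sketch, not from the article or the presentation), a handler that streams its payload through a fixed, process-level buffer allocates almost nothing per invocation, whereas one that builds up intermediate Strings or collections on every request will eventually force the serial collector to pause:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LowAllocationHandler implements RequestStreamHandler {

    // Allocated once per process and reused for every invocation the
    // container serves; Lambda runs one invocation at a time per container.
    private static final byte[] BUFFER = new byte[64 * 1024];

    @Override
    public void handleRequest(InputStream in, OutputStream out, Context context)
            throws IOException {
        long total = 0;
        int read;
        // Stream the payload through a fixed buffer instead of materializing
        // it as new heap objects on every request.
        while ((read = in.read(BUFFER)) != -1) {
            total += read;
        }
        out.write(("processed " + total + " bytes").getBytes(StandardCharsets.UTF_8));
    }
}
```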


James Kao is a VP of Product Management at CA Technologies
