It’s time to take a look back at what went down in Finagle, Finatra, and related libraries over the past few months and get an idea of what’s to come. This covers the April and June releases (apologies for missing May) as well as the upcoming 7.0.0 release, planned for the next week or two. We plan to continue these reviews quarterly. You can start with a recap of what we were talking about last time.
Considerable effort went into improving the throughput of Finagle services, and your CPU cores and garbage collectors have spent that extra time getting an early start on their summer beach reads. This work was broad-based and intended to help the majority of Finagle users. For example, our Tweet service saw a 15% decrease in CPU time and a 13% decrease in allocations, while our User service saw decreases of 16% and 6%, respectively.
The work began with a suite of optimizations to Twitter Futures [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] (these ideas came courtesy of @flaviowbrasil).
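For readers newer to Twitter Futures (`com.twitter.util.Future`), here is a minimal sketch of the kind of composition chain whose internals these optimizations make cheaper; the functions and values are hypothetical stand-ins, not code from the changes above.

```scala
import com.twitter.util.{Await, Future}

// Toy pipeline of chained transformations; names and values are illustrative only.
def lookupUserId(screenName: String): Future[Long] =
  Future.value(screenName.hashCode.toLong)

def fetchTimelineSize(userId: Long): Future[Int] =
  Future.value(42)

val result: Future[String] =
  lookupUserId("finagle")
    .flatMap(fetchTimelineSize) // each flatMap/map allocates continuation state,
    .map(size => s"$size tweets") // which is why Future internals are allocation-sensitive

println(Await.result(result))
```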
Finagle’s memcached client got a tune-up [1, 2, 3, 4, 5], and microbenchmarks show decoding times decreased by a factor of four.
The move to Netty 4 allows us to take advantage of more optimizations. Internally, we’ve toggled on buffer pooling and refcounting for ThriftMux control messages, and the rollout of the edge-triggered native transport is in progress.
Our new load balancer, Deterministic Aperture, has begun early production usage. It is early days, but the initial results are promising, and our goal is to promote it to Finagle’s default load balancer.
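If you want to experiment, aperture-style balancing can be selected per client today via `withLoadBalancer`. Note that this sketch only picks the aperture family; the deterministic behavior described above is being rolled out via internal toggles and is not shown here. The destination and label are placeholders.

```scala
import com.twitter.finagle.Http
import com.twitter.finagle.loadbalancer.Balancers

// Select the aperture load balancer family for this client.
// "inet!api.example.com:443" and "example-client" are placeholders.
val client = Http.client
  .withLoadBalancer(Balancers.aperture())
  .newService("inet!api.example.com:443", "example-client")
```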
Scrooge work included allocation reductions for Scala and Java generated code. Investigations have begun into whether the generated code can be made more modular, which would unlock ResponseClassification on the server side, among other wins.
The emphasis on efficiency will continue this summer with a few bets we believe will pay off. The first is making ThriftMux+Scrooge operate directly on off-heap buffer representations, unlocking zero-copy payloads. Given the gains we’ve seen by leaving Mux control messages off-heap, we expect big wins here. The second bet is changing Mux’s sessions to be “push based” instead of “pull based”. This avoids the conversions back and forth from Netty’s push model, and early prototyping has shown significant throughput improvements. Assuming the new push-based model performs as expected, we plan to deliver similar changes for HTTP/2 and Memcached.
Transparently replacing HTTP/1.1 usage with HTTP/2 is underway. H2 gives you the resource reductions (a single multiplexed connection) and resiliency features (fast rolling restarts without a success rate drop) that services are already accustomed to with Mux.
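As an illustration (the precise configuration knob has varied across releases), opting a client into HTTP/2 looks roughly like the sketch below; the destination and label are placeholders.

```scala
import com.twitter.finagle.Http

// Opt this client into HTTP/2. When both sides speak H2, requests are
// multiplexed over a single connection rather than a pool of connections.
val h2Client = Http.client
  .withHttp2
  .newService("inet!api.example.com:80", "h2-example")
```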
While most of the spring’s work was plumbing that you get for free, a couple of user-facing APIs were added: Tunables and MethodBuilder.
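As a taste of MethodBuilder, here is a minimal sketch of giving a single logical method its own timeouts and retry policy on an HTTP client; the destination, method name, and timeout values are illustrative, not recommendations.

```scala
import com.twitter.conversions.time._
import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response}
import com.twitter.finagle.http.service.HttpResponseClassifier

// A per-method client with its own timeouts and retries.
// The destination and method name are placeholders.
val showTweet: Service[Request, Response] =
  Http.client
    .methodBuilder("inet!tweet-service.local:8080")
    .withTimeoutPerRequest(200.milliseconds)
    .withTimeoutTotal(500.milliseconds)
    .withRetryForClassifier(HttpResponseClassifier.ServerErrorsAsFailures)
    .newService("showTweet")
```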
There is a rich tradition of our interns landing incredibly useful functionality: the UI for TwitterServer’s admin pages, client-side nack admission control, and histogram details. This summer is no different, with @McKardah working to wire Twitter Futures into IDEA’s async stacktraces.
Converting logging in TwitterServer over to slf4j is a large change that is in progress.
A revamped set of APIs for SSL/TLS was shipped and is powering our mTLS implementation. SSL/TLS work continues, with a goal of adding STARTTLS support to Mux.
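For a sense of the new client-side configuration surface, here is a hedged sketch of enabling TLS with explicit key and trust material; all file paths, the hostname, and the destination are placeholders, and the mTLS rollout itself involves internal pieces not shown here.

```scala
import java.io.File
import com.twitter.finagle.Http
import com.twitter.finagle.ssl.{KeyCredentials, TrustCredentials}
import com.twitter.finagle.ssl.client.SslClientConfiguration

// Client-side TLS with explicit key and trust material.
// File paths, hostname, and destination below are placeholders.
val tlsConfig = SslClientConfiguration(
  hostname = Some("api.example.com"),
  keyCredentials =
    KeyCredentials.CertAndKey(new File("/path/to/client.crt"), new File("/path/to/client.key")),
  trustCredentials = TrustCredentials.CertCollection(new File("/path/to/ca.crt"))
)

val tlsClient = Http.client
  .withTransport.tls(tlsConfig)
  .newService("inet!api.example.com:443", "tls-example")
```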
All of Finagle’s protocols have been migrated to Netty 4 (squeee!!!) and the work to rip Netty 3 completely out of Finagle is pretty far along. It has taken us years to get here and the benefits for efficiency and features like HTTP/2 are good indicators of why it was worth it.
Thanks for following along. If anything is unclear or you’d like to know more, please ask on the mailing list and we’ll be happy to help.
Kevin Oliver and the Core Systems Libraries team